00:00:00.001 Started by upstream project "autotest-per-patch" build number 132343
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.021 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.021 The recommended git tool is: git
00:00:00.021 using credential 00000000-0000-0000-0000-000000000002
00:00:00.023 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.041 Fetching changes from the remote Git repository
00:00:00.045 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.079 Using shallow fetch with depth 1
00:00:00.079 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.079 > git --version # timeout=10
00:00:00.109 > git --version # 'git version 2.39.2'
00:00:00.109 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.132 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.132 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.955 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.968 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.983 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.983 > git config core.sparsecheckout # timeout=10
00:00:04.999 > git read-tree -mu HEAD # timeout=10
00:00:05.017 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.048 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.048 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.159 [Pipeline] Start of Pipeline
00:00:05.171 [Pipeline] library
00:00:05.172 Loading library shm_lib@master
00:00:05.172 Library shm_lib@master is cached. Copying from home.
00:00:05.187 [Pipeline] node
00:00:05.199 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.201 [Pipeline] {
00:00:05.210 [Pipeline] catchError
00:00:05.212 [Pipeline] {
00:00:05.226 [Pipeline] wrap
00:00:05.233 [Pipeline] {
00:00:05.239 [Pipeline] stage
00:00:05.240 [Pipeline] { (Prologue)
00:00:05.430 [Pipeline] sh
00:00:05.719 + logger -p user.info -t JENKINS-CI
00:00:05.738 [Pipeline] echo
00:00:05.740 Node: CYP9
00:00:05.748 [Pipeline] sh
00:00:06.054 [Pipeline] setCustomBuildProperty
00:00:06.066 [Pipeline] echo
00:00:06.068 Cleanup processes
00:00:06.074 [Pipeline] sh
00:00:06.366 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.366 3188407 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.378 [Pipeline] sh
00:00:06.664 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.664 ++ grep -v 'sudo pgrep'
00:00:06.664 ++ awk '{print $1}'
00:00:06.664 + sudo kill -9
00:00:06.664 + true
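The cleanup step above is the standard pgrep/awk/kill idiom: list anything still running out of the workspace, drop the pgrep line itself, and force-kill the rest. A minimal standalone sketch of the same pattern (the workspace path is taken from the log; the wrapper script itself is illustrative):

  #!/usr/bin/env bash
  # Kill leftover test processes; '|| true' keeps the step from failing
  # the build when there is nothing to kill (as in the run above).
  ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest   # path from this job's log
  pids=$(sudo pgrep -af "$ws/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  sudo kill -9 $pids || true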
00:00:06.688 [Pipeline] cleanWs
00:00:06.698 [WS-CLEANUP] Deleting project workspace...
00:00:06.698 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.704 [WS-CLEANUP] done
00:00:06.711 [Pipeline] setCustomBuildProperty
00:00:06.728 [Pipeline] sh
00:00:07.016 + sudo git config --global --replace-all safe.directory '*'
00:00:07.121 [Pipeline] httpRequest
00:00:07.489 [Pipeline] echo
00:00:07.491 Sorcerer 10.211.164.20 is alive
00:00:07.500 [Pipeline] retry
00:00:07.502 [Pipeline] {
00:00:07.514 [Pipeline] httpRequest
00:00:07.519 HttpMethod: GET
00:00:07.519 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.520 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.536 Response Code: HTTP/1.1 200 OK
00:00:07.537 Success: Status code 200 is in the accepted range: 200,404
00:00:07.537 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:28.396 [Pipeline] }
00:00:28.412 [Pipeline] // retry
00:00:28.418 [Pipeline] sh
00:00:28.702 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:28.717 [Pipeline] httpRequest
00:00:29.106 [Pipeline] echo
00:00:29.108 Sorcerer 10.211.164.20 is alive
00:00:29.117 [Pipeline] retry
00:00:29.119 [Pipeline] {
00:00:29.132 [Pipeline] httpRequest
00:00:29.137 HttpMethod: GET
00:00:29.138 URL: http://10.211.164.20/packages/spdk_9b64b1304a1110564887d506b0fb7b0ef65899c9.tar.gz
00:00:29.138 Sending request to url: http://10.211.164.20/packages/spdk_9b64b1304a1110564887d506b0fb7b0ef65899c9.tar.gz
00:00:29.155 Response Code: HTTP/1.1 200 OK
00:00:29.156 Success: Status code 200 is in the accepted range: 200,404
00:00:29.156 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_9b64b1304a1110564887d506b0fb7b0ef65899c9.tar.gz
00:01:23.683 [Pipeline] }
00:01:23.701 [Pipeline] // retry
00:01:23.710 [Pipeline] sh
00:01:23.998 + tar --no-same-owner -xf spdk_9b64b1304a1110564887d506b0fb7b0ef65899c9.tar.gz
00:01:27.315 [Pipeline] sh
00:01:27.601 + git -C spdk log --oneline -n5
00:01:27.601 9b64b1304 bdev: Add APIs get metadata config via desc depending on hide_metadata option
00:01:27.601 95f6a056e bdev: Add spdk_bdev_open_ext_v2() to support per-open options
00:01:27.602 a38267915 bdev: Locate all hot data in spdk_bdev_desc to the first cache line
00:01:27.602 095307e93 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc
00:01:27.602 3b3a1a596 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit()
00:01:27.614 [Pipeline] }
00:01:27.630 [Pipeline] // stage
00:01:27.639 [Pipeline] stage
00:01:27.642 [Pipeline] { (Prepare)
00:01:27.659 [Pipeline] writeFile
00:01:27.674 [Pipeline] sh
00:01:27.963 + logger -p user.info -t JENKINS-CI
00:01:27.979 [Pipeline] sh
00:01:28.267 + logger -p user.info -t JENKINS-CI
00:01:28.281 [Pipeline] sh
00:01:28.573 + cat autorun-spdk.conf
00:01:28.573 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:28.573 SPDK_TEST_NVMF=1
00:01:28.573 SPDK_TEST_NVME_CLI=1
00:01:28.573 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:28.573 SPDK_TEST_NVMF_NICS=e810
00:01:28.573 SPDK_TEST_VFIOUSER=1
00:01:28.573 SPDK_RUN_UBSAN=1
00:01:28.573 NET_TYPE=phy
00:01:28.582 RUN_NIGHTLY=0
00:01:28.587 [Pipeline] readFile
00:01:28.612 [Pipeline] withEnv
00:01:28.616 [Pipeline] {
00:01:28.629 [Pipeline] sh
00:01:28.918 + set -ex
00:01:28.918 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:28.918 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:28.918 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:28.918 ++ SPDK_TEST_NVMF=1
00:01:28.918 ++ SPDK_TEST_NVME_CLI=1
00:01:28.918 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:28.918 ++ SPDK_TEST_NVMF_NICS=e810
00:01:28.918 ++ SPDK_TEST_VFIOUSER=1
00:01:28.918 ++ SPDK_RUN_UBSAN=1
00:01:28.918 ++ NET_TYPE=phy
00:01:28.918 ++ RUN_NIGHTLY=0
00:01:28.918 + case $SPDK_TEST_NVMF_NICS in
00:01:28.918 + DRIVERS=ice
00:01:28.918 + [[ tcp == \r\d\m\a ]]
00:01:28.918 + [[ -n ice ]]
00:01:28.918 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:28.918 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:28.918 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:28.918 rmmod: ERROR: Module irdma is not currently loaded
00:01:28.918 rmmod: ERROR: Module i40iw is not currently loaded
00:01:28.918 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:28.918 + true
00:01:28.918 + for D in $DRIVERS
00:01:28.918 + sudo modprobe ice
00:01:28.918 + exit 0
00:01:28.928 [Pipeline] }
00:01:28.943 [Pipeline] // withEnv
00:01:28.948 [Pipeline] }
00:01:28.963 [Pipeline] // stage
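The driver staging above maps SPDK_TEST_NVMF_NICS=e810 to the ice kernel driver, first unloading RDMA providers that could claim the NIC. A hedged sketch of that logic (the module list and the e810-to-ice mapping are from the log; any other case arm is omitted as an assumption):

  # Select kernel drivers from the NIC family under test.
  case $SPDK_TEST_NVMF_NICS in
    e810) DRIVERS=ice ;;   # the mapping exercised in this run
  esac
  # Unload RDMA providers first; missing modules only print errors,
  # so ignore failures the way the pipeline's '+ true' does.
  sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
  for D in $DRIVERS; do
    sudo modprobe "$D"
  done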
00:01:28.972 [Pipeline] catchError
00:01:28.974 [Pipeline] {
00:01:28.991 [Pipeline] timeout
00:01:28.991 Timeout set to expire in 1 hr 0 min
00:01:28.993 [Pipeline] {
00:01:29.008 [Pipeline] stage
00:01:29.010 [Pipeline] { (Tests)
00:01:29.026 [Pipeline] sh
00:01:29.315 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:29.315 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:29.315 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:29.315 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:29.315 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:29.315 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:29.315 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:29.315 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:29.315 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:29.315 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:29.316 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:29.316 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:29.316 + source /etc/os-release
00:01:29.316 ++ NAME='Fedora Linux'
00:01:29.316 ++ VERSION='39 (Cloud Edition)'
00:01:29.316 ++ ID=fedora
00:01:29.316 ++ VERSION_ID=39
00:01:29.316 ++ VERSION_CODENAME=
00:01:29.316 ++ PLATFORM_ID=platform:f39
00:01:29.316 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:29.316 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:29.316 ++ LOGO=fedora-logo-icon
00:01:29.316 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:29.316 ++ HOME_URL=https://fedoraproject.org/
00:01:29.316 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:29.316 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:29.316 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:29.316 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:29.316 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:29.316 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:29.316 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:29.316 ++ SUPPORT_END=2024-11-12
00:01:29.316 ++ VARIANT='Cloud Edition'
00:01:29.316 ++ VARIANT_ID=cloud
00:01:29.316 + uname -a
00:01:29.316 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:29.316 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:32.619 Hugepages
00:01:32.619 node hugesize free / total
00:01:32.619 node0 1048576kB 0 / 0
00:01:32.619 node0 2048kB 0 / 0
00:01:32.619 node1 1048576kB 0 / 0
00:01:32.619 node1 2048kB 0 / 0
00:01:32.619
00:01:32.619 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:32.619 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:32.619 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:32.619 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:32.619 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:32.619 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:32.619 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:32.619 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:32.619 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:32.619 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:32.619 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:32.619 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:32.619 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:32.619 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:32.619 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:32.619 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:32.619 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:32.619 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
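The "node hugesize free / total" table that setup.sh status prints above can also be read straight from sysfs; a minimal sketch reproducing the same columns (the sysfs layout is standard Linux, the loop and formatting are illustrative):

  # Per-NUMA-node hugepage counts, same data as the setup.sh status table.
  for d in /sys/devices/system/node/node*/hugepages/hugepages-*; do
    node=${d#/sys/devices/system/node/}; node=${node%%/*}   # e.g. node0
    size=${d##*hugepages-}                                  # e.g. 2048kB
    echo "$node $size $(cat "$d/free_hugepages") / $(cat "$d/nr_hugepages")"
  done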
00:01:32.620 + rm -f /tmp/spdk-ld-path
00:01:32.620 + source autorun-spdk.conf
00:01:32.620 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:32.620 ++ SPDK_TEST_NVMF=1
00:01:32.620 ++ SPDK_TEST_NVME_CLI=1
00:01:32.620 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:32.620 ++ SPDK_TEST_NVMF_NICS=e810
00:01:32.620 ++ SPDK_TEST_VFIOUSER=1
00:01:32.620 ++ SPDK_RUN_UBSAN=1
00:01:32.620 ++ NET_TYPE=phy
00:01:32.620 ++ RUN_NIGHTLY=0
00:01:32.620 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:32.620 + [[ -n '' ]]
00:01:32.620 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:32.620 + for M in /var/spdk/build-*-manifest.txt
00:01:32.620 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:32.620 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:32.620 + for M in /var/spdk/build-*-manifest.txt
00:01:32.620 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:32.620 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:32.620 + for M in /var/spdk/build-*-manifest.txt
00:01:32.620 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:32.620 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:32.620 ++ uname
00:01:32.620 + [[ Linux == \L\i\n\u\x ]]
00:01:32.620 + sudo dmesg -T
00:01:32.620 + sudo dmesg --clear
00:01:32.620 + dmesg_pid=3189387
00:01:32.620 + [[ Fedora Linux == FreeBSD ]]
00:01:32.620 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:32.620 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:32.620 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:32.620 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:32.620 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:32.620 + [[ -x /usr/src/fio-static/fio ]]
00:01:32.620 + export FIO_BIN=/usr/src/fio-static/fio
00:01:32.620 + FIO_BIN=/usr/src/fio-static/fio
00:01:32.620 + sudo dmesg -Tw
00:01:32.620 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:32.620 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:32.620 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:32.620 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:32.620 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:32.620 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:32.620 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:32.620 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:32.620 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:32.620 06:59:54 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:32.620 06:59:54 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:32.620 06:59:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:32.620 06:59:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:32.620 06:59:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:32.620 06:59:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:32.620 06:59:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:32.620 06:59:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:32.620 06:59:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:32.620 06:59:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:32.620 06:59:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:32.620 06:59:54 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:32.620 06:59:54 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:32.882 06:59:54 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:32.882 06:59:54 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
06:59:54 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:32.882 06:59:54 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:32.882 06:59:54 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:32.882 06:59:54 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:32.882 06:59:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:32.882 06:59:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:32.882 06:59:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:32.882 06:59:54 -- paths/export.sh@5 -- $ export PATH
00:01:32.882 06:59:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:32.882 06:59:54 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:32.882 06:59:54 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:32.882 06:59:54 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732082394.XXXXXX
00:01:32.882 06:59:54 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732082394.fX9hVI
00:01:32.882 06:59:54 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:01:32.882 06:59:54 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:01:32.882 06:59:54 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:32.882 06:59:54 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:32.882 06:59:54 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
06:59:54 -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:32.882 06:59:54 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:01:32.882 06:59:54 -- common/autotest_common.sh@10 -- $ set +x
00:01:32.882 06:59:54 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:32.882 06:59:54 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:32.882 06:59:54 -- pm/common@17 -- $ local monitor
00:01:32.882 06:59:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:32.882 06:59:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:32.882 06:59:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:32.882 06:59:54 -- pm/common@21 -- $ date +%s
00:01:32.882 06:59:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:32.882 06:59:54 -- pm/common@25 -- $ sleep 1
00:01:32.882 06:59:54 -- pm/common@21 -- $ date +%s
00:01:32.882 06:59:54 -- pm/common@21 -- $ date +%s
00:01:32.882 06:59:54 -- pm/common@21 -- $ date +%s
00:01:32.882 06:59:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732082394
00:01:32.882 06:59:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732082394
00:01:32.882 06:59:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732082394
00:01:32.882 06:59:54 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732082394
00:01:32.882 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732082394_collect-cpu-load.pm.log
00:01:32.882 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732082394_collect-vmstat.pm.log
00:01:32.882 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732082394_collect-cpu-temp.pm.log
00:01:32.882 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732082394_collect-bmc-pm.bmc.pm.log
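All four pm collectors above follow one launch convention: a shared date +%s suffix so their logs correlate, an output directory under output/power, and -l -p to log into a named file. A hedged sketch of that convention (collector names and flags are from the log; the loop wrapper and backgrounding are illustrative):

  ts=$(date +%s)   # one suffix shared by every collector's log
  pm=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
  out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/power
  for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
    "$pm/$mon" -d "$out" -l -p "monitor.autobuild.sh.$ts" &
  done
  # BMC power readings need root, hence the sudo -E seen above.
  sudo -E "$pm/collect-bmc-pm" -d "$out" -l -p "monitor.autobuild.sh.$ts" &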
00:01:33.827 06:59:55 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:33.827 06:59:55 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:33.827 06:59:55 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:33.827 06:59:55 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:33.827 06:59:55 -- spdk/autobuild.sh@16 -- $ date -u
00:01:33.827 Wed Nov 20 05:59:55 AM UTC 2024
00:01:33.827 06:59:55 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:33.827 v25.01-pre-190-g9b64b1304
00:01:33.827 06:59:56 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:33.827 06:59:56 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:33.827 06:59:56 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:33.827 06:59:56 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:33.827 06:59:56 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:33.827 06:59:56 -- common/autotest_common.sh@10 -- $ set +x
00:01:33.827 ************************************
00:01:33.827 START TEST ubsan
00:01:33.827 ************************************
00:01:33.827 06:59:56 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:01:33.827 using ubsan
00:01:33.827
00:01:33.827 real 0m0.001s
00:01:33.827 user 0m0.001s
00:01:33.827 sys 0m0.000s
00:01:33.827 06:59:56 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:01:33.827 06:59:56 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:33.827 ************************************
00:01:33.827 END TEST ubsan
00:01:33.827 ************************************
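run_test above is what produces the START TEST/END TEST banners and the real/user/sys summary around each test. The real helper lives in SPDK's autotest_common.sh; a stripped-down stand-in that only mimics the output shape seen in this log:

  run_test() {   # illustrative stand-in, not the actual SPDK helper
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"    # bash's time prints the real/user/sys lines
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
  }
  run_test ubsan echo 'using ubsan'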
00:01:33.827 06:59:56 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:33.827 06:59:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:33.827 06:59:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:33.827 06:59:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:33.827 06:59:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:33.827 06:59:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:33.827 06:59:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:33.827 06:59:56 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:33.827 06:59:56 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:34.088 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:34.088 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:34.661 Using 'verbs' RDMA provider
00:01:50.146 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:05.057 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:05.057 Creating mk/config.mk...done.
00:02:05.057 Creating mk/cc.flags.mk...done.
00:02:05.057 Type 'make' to build.
07:00:25 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
07:00:25 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
07:00:25 -- common/autotest_common.sh@1109 -- $ xtrace_disable
07:00:25 -- common/autotest_common.sh@10 -- $ set +x
00:02:05.057 ************************************
00:02:05.057 START TEST make
00:02:05.057 ************************************
07:00:25 make -- common/autotest_common.sh@1127 -- $ make -j144
00:02:05.317 make[1]: Nothing to be done for 'all'.
00:02:05.317 The Meson build system
00:02:05.317 Version: 1.5.0
00:02:05.317 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:05.317 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:05.317 Build type: native build
00:02:05.317 Project name: libvfio-user
00:02:05.317 Project version: 0.0.1
00:02:05.317 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:05.317 C linker for the host machine: cc ld.bfd 2.40-14
00:02:05.317 Host machine cpu family: x86_64
00:02:05.317 Host machine cpu: x86_64
00:02:05.317 Run-time dependency threads found: YES
00:02:05.317 Library dl found: YES
00:02:05.317 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:05.317 Run-time dependency json-c found: YES 0.17
00:02:05.317 Run-time dependency cmocka found: YES 1.1.7
00:02:05.317 Program pytest-3 found: NO
00:02:05.317 Program flake8 found: NO
00:02:05.317 Program misspell-fixer found: NO
00:02:05.317 Program restructuredtext-lint found: NO
00:02:05.317 Program valgrind found: YES (/usr/bin/valgrind)
00:02:05.317 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:05.317 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:05.317 Compiler for C supports arguments -Wwrite-strings: YES
00:02:05.317 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:05.317 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:05.317 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:05.317 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:05.317 Build targets in project: 8
00:02:05.317 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:05.317 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:05.317
00:02:05.317 libvfio-user 0.0.1
00:02:05.317
00:02:05.317 User defined options
00:02:05.317 buildtype : debug
00:02:05.317 default_library: shared
00:02:05.317 libdir : /usr/local/lib
00:02:05.317
00:02:05.317 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:05.576 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:05.836 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:05.836 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:05.836 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:05.836 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:05.836 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:05.836 [6/37] Compiling C object samples/null.p/null.c.o
00:02:05.836 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:05.836 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:05.836 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:05.836 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:05.836 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:05.836 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:05.836 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:05.836 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:05.836 [15/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:05.836 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:05.836 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:05.836 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:05.836 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:05.836 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:05.836 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:05.836 [22/37] Compiling C object samples/server.p/server.c.o
00:02:05.836 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:05.836 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:05.836 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:05.836 [26/37] Compiling C object samples/client.p/client.c.o
00:02:05.836 [27/37] Linking target samples/client
00:02:05.836 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:05.836 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:05.836 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:02:05.836 [31/37] Linking target test/unit_tests
00:02:06.097 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:06.097 [33/37] Linking target samples/server
00:02:06.097 [34/37] Linking target samples/null
00:02:06.097 [35/37] Linking target samples/lspci
00:02:06.097 [36/37] Linking target samples/shadow_ioeventfd_server
00:02:06.097 [37/37] Linking target samples/gpio-pci-idio-16
00:02:06.097 INFO: autodetecting backend as ninja
00:02:06.097 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:06.357 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:06.618 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:06.618 ninja: no work to do.
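The libvfio-user step above is the plain meson/ninja staged-install idiom: configure a debug build, compile it, then install into a DESTDIR inside the SPDK tree instead of the system prefix. The same three steps in isolation (paths shortened; the setup flags are an assumption mirroring the 'User defined options' summary above):

  src=spdk/libvfio-user                            # shortened illustrative paths
  build=spdk/build/libvfio-user/build-debug
  meson setup "$build" "$src" --buildtype=debug -Ddefault_library=shared
  ninja -C "$build"
  # Stage the install under the SPDK tree, as the DESTDIR= line above does.
  DESTDIR=spdk/build/libvfio-user meson install --quiet -C "$build"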
00:02:13.194 The Meson build system
00:02:13.194 Version: 1.5.0
00:02:13.194 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:13.194 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:13.194 Build type: native build
00:02:13.194 Program cat found: YES (/usr/bin/cat)
00:02:13.194 Project name: DPDK
00:02:13.194 Project version: 24.03.0
00:02:13.194 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:13.194 C linker for the host machine: cc ld.bfd 2.40-14
00:02:13.194 Host machine cpu family: x86_64
00:02:13.194 Host machine cpu: x86_64
00:02:13.194 Message: ## Building in Developer Mode ##
00:02:13.194 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:13.194 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:13.194 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:13.194 Program python3 found: YES (/usr/bin/python3)
00:02:13.194 Program cat found: YES (/usr/bin/cat)
00:02:13.195 Compiler for C supports arguments -march=native: YES
00:02:13.195 Checking for size of "void *" : 8
00:02:13.195 Checking for size of "void *" : 8 (cached)
00:02:13.195 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:13.195 Library m found: YES
00:02:13.195 Library numa found: YES
00:02:13.195 Has header "numaif.h" : YES
00:02:13.195 Library fdt found: NO
00:02:13.195 Library execinfo found: NO
00:02:13.195 Has header "execinfo.h" : YES
00:02:13.195 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:13.195 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:13.195 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:13.195 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:13.195 Run-time dependency openssl found: YES 3.1.1
00:02:13.195 Run-time dependency libpcap found: YES 1.10.4
00:02:13.195 Has header "pcap.h" with dependency libpcap: YES
00:02:13.195 Compiler for C supports arguments -Wcast-qual: YES
00:02:13.195 Compiler for C supports arguments -Wdeprecated: YES
00:02:13.195 Compiler for C supports arguments -Wformat: YES
00:02:13.195 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:13.195 Compiler for C supports arguments -Wformat-security: NO
00:02:13.195 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:13.195 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:13.195 Compiler for C supports arguments -Wnested-externs: YES
00:02:13.195 Compiler for C supports arguments -Wold-style-definition: YES
00:02:13.195 Compiler for C supports arguments -Wpointer-arith: YES
00:02:13.195 Compiler for C supports arguments -Wsign-compare: YES
00:02:13.195 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:13.195 Compiler for C supports arguments -Wundef: YES
00:02:13.195 Compiler for C supports arguments -Wwrite-strings: YES
00:02:13.195 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:13.195 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:13.195 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:13.195 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:13.195 Program objdump found: YES (/usr/bin/objdump)
00:02:13.195 Compiler for C supports arguments -mavx512f: YES
00:02:13.195 Checking if "AVX512 checking" compiles: YES
00:02:13.195 Fetching value of define "__SSE4_2__" : 1
00:02:13.195 Fetching value of define "__AES__" : 1
00:02:13.195 Fetching value of define "__AVX__" : 1
00:02:13.195 Fetching value of define "__AVX2__" : 1
00:02:13.195 Fetching value of define "__AVX512BW__" : 1
00:02:13.195 Fetching value of define "__AVX512CD__" : 1
00:02:13.195 Fetching value of define "__AVX512DQ__" : 1
00:02:13.195 Fetching value of define "__AVX512F__" : 1
00:02:13.195 Fetching value of define "__AVX512VL__" : 1
00:02:13.195 Fetching value of define "__PCLMUL__" : 1
00:02:13.195 Fetching value of define "__RDRND__" : 1
00:02:13.195 Fetching value of define "__RDSEED__" : 1
00:02:13.195 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:13.195 Fetching value of define "__znver1__" : (undefined)
00:02:13.195 Fetching value of define "__znver2__" : (undefined)
00:02:13.195 Fetching value of define "__znver3__" : (undefined)
00:02:13.195 Fetching value of define "__znver4__" : (undefined)
00:02:13.195 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:13.195 Message: lib/log: Defining dependency "log"
00:02:13.195 Message: lib/kvargs: Defining dependency "kvargs"
00:02:13.195 Message: lib/telemetry: Defining dependency "telemetry"
00:02:13.195 Checking for function "getentropy" : NO
00:02:13.195 Message: lib/eal: Defining dependency "eal"
00:02:13.195 Message: lib/ring: Defining dependency "ring"
00:02:13.195 Message: lib/rcu: Defining dependency "rcu"
00:02:13.195 Message: lib/mempool: Defining dependency "mempool"
00:02:13.195 Message: lib/mbuf: Defining dependency "mbuf"
00:02:13.195 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:13.195 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:13.195 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:13.195 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:13.195 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:13.195 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:13.195 Compiler for C supports arguments -mpclmul: YES
00:02:13.195 Compiler for C supports arguments -maes: YES
00:02:13.195 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:13.195 Compiler for C supports arguments -mavx512bw: YES
00:02:13.195 Compiler for C supports arguments -mavx512dq: YES
00:02:13.195 Compiler for C supports arguments -mavx512vl: YES
00:02:13.195 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:13.195 Compiler for C supports arguments -mavx2: YES
00:02:13.195 Compiler for C supports arguments -mavx: YES
00:02:13.195 Message: lib/net: Defining dependency "net"
00:02:13.195 Message: lib/meter: Defining dependency "meter"
00:02:13.195 Message: lib/ethdev: Defining dependency "ethdev"
00:02:13.195 Message: lib/pci: Defining dependency "pci"
00:02:13.195 Message: lib/cmdline: Defining dependency "cmdline"
00:02:13.195 Message: lib/hash: Defining dependency "hash"
00:02:13.195 Message: lib/timer: Defining dependency "timer"
00:02:13.195 Message: lib/compressdev: Defining dependency "compressdev"
00:02:13.195 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:13.195 Message: lib/dmadev: Defining dependency "dmadev"
00:02:13.195 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:13.195 Message: lib/power: Defining dependency "power"
00:02:13.195 Message: lib/reorder: Defining dependency "reorder"
00:02:13.195 Message: lib/security: Defining dependency "security"
00:02:13.195 Has header "linux/userfaultfd.h" : YES
00:02:13.195 Has header "linux/vduse.h" : YES
00:02:13.195 Message: lib/vhost: Defining dependency "vhost"
00:02:13.195 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:13.195 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:13.195 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:13.195 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:13.195 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:13.195 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:13.195 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:13.195 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:13.195 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:13.195 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:13.195 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:13.195 Configuring doxy-api-html.conf using configuration
00:02:13.195 Configuring doxy-api-man.conf using configuration
00:02:13.195 Program mandb found: YES (/usr/bin/mandb)
00:02:13.195 Program sphinx-build found: NO
00:02:13.195 Configuring rte_build_config.h using configuration
00:02:13.195 Message:
00:02:13.195 =================
00:02:13.195 Applications Enabled
00:02:13.195 =================
00:02:13.195
00:02:13.195 apps:
00:02:13.195
00:02:13.195
00:02:13.195 Message:
00:02:13.195 =================
00:02:13.195 Libraries Enabled
00:02:13.195 =================
00:02:13.195
00:02:13.195 libs:
00:02:13.195 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:13.195 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:13.195 cryptodev, dmadev, power, reorder, security, vhost,
00:02:13.195
00:02:13.195 Message:
00:02:13.195 ===============
00:02:13.195 Drivers Enabled
00:02:13.195 ===============
00:02:13.195
00:02:13.195 common:
00:02:13.195
00:02:13.195 bus:
00:02:13.195 pci, vdev,
00:02:13.195 mempool:
00:02:13.195 ring,
00:02:13.195 dma:
00:02:13.195
00:02:13.195 net:
00:02:13.195
00:02:13.195 crypto:
00:02:13.195
00:02:13.195 compress:
00:02:13.195
00:02:13.195 vdpa:
00:02:13.195
00:02:13.195
00:02:13.195 Message:
00:02:13.195 =================
00:02:13.195 Content Skipped
00:02:13.195 =================
00:02:13.195
00:02:13.195 apps:
00:02:13.195 dumpcap: explicitly disabled via build config
00:02:13.195 graph: explicitly disabled via build config
00:02:13.195 pdump: explicitly disabled via build config
00:02:13.195 proc-info: explicitly disabled via build config
00:02:13.195 test-acl: explicitly disabled via build config
00:02:13.195 test-bbdev: explicitly disabled via build config
00:02:13.195 test-cmdline: explicitly disabled via build config
00:02:13.195 test-compress-perf: explicitly disabled via build config
00:02:13.195 test-crypto-perf: explicitly disabled via build config
00:02:13.195 test-dma-perf: explicitly disabled via build config
00:02:13.195 test-eventdev: explicitly disabled via build config
00:02:13.195 test-fib: explicitly disabled via build config
00:02:13.195 test-flow-perf: explicitly disabled via build config
00:02:13.195 test-gpudev: explicitly disabled via build config
00:02:13.195 test-mldev: explicitly disabled via build config
00:02:13.195 test-pipeline: explicitly disabled via build config
00:02:13.195 test-pmd: explicitly disabled via build config
00:02:13.195 test-regex: explicitly disabled via build config
00:02:13.195 test-sad: explicitly disabled via build config
00:02:13.195 test-security-perf: explicitly disabled via build config
00:02:13.195
00:02:13.195 libs:
00:02:13.195 argparse: explicitly disabled via build config
00:02:13.195 metrics: explicitly disabled via build config
00:02:13.195 acl: explicitly disabled via build config
00:02:13.195 bbdev: explicitly disabled via build config
00:02:13.195 bitratestats: explicitly disabled via build config
00:02:13.195 bpf: explicitly disabled via build config
00:02:13.195 cfgfile: explicitly disabled via build config
00:02:13.195 distributor: explicitly disabled via build config
00:02:13.195 efd: explicitly disabled via build config
00:02:13.195 eventdev: explicitly disabled via build config
00:02:13.195 dispatcher: explicitly disabled via build config
00:02:13.195 gpudev: explicitly disabled via build config
00:02:13.195 gro: explicitly disabled via build config
00:02:13.195 gso: explicitly disabled via build config
00:02:13.195 ip_frag: explicitly disabled via build config
00:02:13.195 jobstats: explicitly disabled via build config
00:02:13.195 latencystats: explicitly disabled via build config
00:02:13.195 lpm: explicitly disabled via build config
00:02:13.195 member: explicitly disabled via build config
00:02:13.195 pcapng: explicitly disabled via build config
00:02:13.195 rawdev: explicitly disabled via build config
00:02:13.195 regexdev: explicitly disabled via build config
00:02:13.195 mldev: explicitly disabled via build config
00:02:13.195 rib: explicitly disabled via build config
00:02:13.195 sched: explicitly disabled via build config
00:02:13.195 stack: explicitly disabled via build config
00:02:13.195 ipsec: explicitly disabled via build config
00:02:13.195 pdcp: explicitly disabled via build config
00:02:13.195 fib: explicitly disabled via build config
00:02:13.195 port: explicitly disabled via build config
00:02:13.195 pdump: explicitly disabled via build config
00:02:13.195 table: explicitly disabled via build config
00:02:13.195 pipeline: explicitly disabled via build config
00:02:13.195 graph: explicitly disabled via build config
00:02:13.195 node: explicitly disabled via build config
00:02:13.195
00:02:13.195 drivers:
00:02:13.195 common/cpt: not in enabled drivers build config
00:02:13.195 common/dpaax: not in enabled drivers build config
00:02:13.195 common/iavf: not in enabled drivers build config
00:02:13.195 common/idpf: not in enabled drivers build config
00:02:13.195 common/ionic: not in enabled drivers build config
00:02:13.195 common/mvep: not in enabled drivers build config
00:02:13.195 common/octeontx: not in enabled drivers build config
00:02:13.195 bus/auxiliary: not in enabled drivers build config
00:02:13.195 bus/cdx: not in enabled drivers build config
00:02:13.195 bus/dpaa: not in enabled drivers build config
00:02:13.195 bus/fslmc: not in enabled drivers build config
00:02:13.195 bus/ifpga: not in enabled drivers build config
00:02:13.195 bus/platform: not in enabled drivers build config
00:02:13.195 bus/uacce: not in enabled drivers build config
00:02:13.195 bus/vmbus: not in enabled drivers build config
00:02:13.195 common/cnxk: not in enabled drivers build config
00:02:13.195 common/mlx5: not in enabled drivers build config
00:02:13.195 common/nfp: not in enabled drivers build config
00:02:13.195 common/nitrox: not in enabled drivers build config
00:02:13.195 common/qat: not in enabled drivers build config
00:02:13.195 common/sfc_efx: not in enabled drivers build config
00:02:13.195 mempool/bucket: not in enabled drivers build config
00:02:13.195 mempool/cnxk: not in enabled drivers build config
00:02:13.195 mempool/dpaa: not in enabled drivers build config
00:02:13.195 mempool/dpaa2: not in enabled drivers build config
00:02:13.195 mempool/octeontx: not in enabled drivers build config
00:02:13.195 mempool/stack: not in enabled drivers build config
00:02:13.195 dma/cnxk: not in enabled drivers build config
00:02:13.195 dma/dpaa: not in enabled drivers build config
00:02:13.196 dma/dpaa2: not in enabled drivers build config
00:02:13.196 dma/hisilicon: not in enabled drivers build config
00:02:13.196 dma/idxd: not in enabled drivers build config
00:02:13.196 dma/ioat: not in enabled drivers build config
00:02:13.196 dma/skeleton: not in enabled drivers build config
00:02:13.196 net/af_packet: not in enabled drivers build config
00:02:13.196 net/af_xdp: not in enabled drivers build config
00:02:13.196 net/ark: not in enabled drivers build config
00:02:13.196 net/atlantic: not in enabled drivers build config
00:02:13.196 net/avp: not in enabled drivers build config
00:02:13.196 net/axgbe: not in enabled drivers build config
00:02:13.196 net/bnx2x: not in enabled drivers build config
00:02:13.196 net/bnxt: not in enabled drivers build config
00:02:13.196 net/bonding: not in enabled drivers build config
00:02:13.196 net/cnxk: not in enabled drivers build config
00:02:13.196 net/cpfl: not in enabled drivers build config
00:02:13.196 net/cxgbe: not in enabled drivers build config
00:02:13.196 net/dpaa: not in enabled drivers build config
00:02:13.196 net/dpaa2: not in enabled drivers build config
00:02:13.196 net/e1000: not in enabled drivers build config
00:02:13.196 net/ena: not in enabled drivers build config
00:02:13.196 net/enetc: not in enabled drivers build config
00:02:13.196 net/enetfec: not in enabled drivers build config
00:02:13.196 net/enic: not in enabled drivers build config
00:02:13.196 net/failsafe: not in enabled drivers build config
00:02:13.196 net/fm10k: not in enabled drivers build config
00:02:13.196 net/gve: not in enabled drivers build config
00:02:13.196 net/hinic: not in enabled drivers build config
00:02:13.196 net/hns3: not in enabled drivers build config
00:02:13.196 net/i40e: not in enabled drivers build config
00:02:13.196 net/iavf: not in enabled drivers build config
00:02:13.196 net/ice: not in enabled drivers build config
00:02:13.196 net/idpf: not in enabled drivers build config
00:02:13.196 net/igc: not in enabled drivers build config
00:02:13.196 net/ionic: not in enabled drivers build config
00:02:13.196 net/ipn3ke: not in enabled drivers build config
00:02:13.196 net/ixgbe: not in enabled drivers build config
00:02:13.196 net/mana: not in enabled drivers build config
00:02:13.196 net/memif: not in enabled drivers build config
00:02:13.196 net/mlx4: not in enabled drivers build config
00:02:13.196 net/mlx5: not in enabled drivers build config
00:02:13.196 net/mvneta: not in enabled drivers build config
00:02:13.196 net/mvpp2: not in enabled drivers build config
00:02:13.196 net/netvsc: not in enabled drivers build config
00:02:13.196 net/nfb: not in enabled drivers build config
00:02:13.196 net/nfp: not in enabled drivers build config
00:02:13.196 net/ngbe: not in enabled drivers build config
00:02:13.196 net/null: not in enabled drivers build config
00:02:13.196 net/octeontx: not in enabled drivers build config
00:02:13.196 net/octeon_ep: not in enabled drivers build config
00:02:13.196 net/pcap: not in enabled drivers build config
00:02:13.196 net/pfe: not in enabled drivers build config
00:02:13.196 net/qede: not in enabled drivers build config
00:02:13.196 net/ring: not in enabled drivers build config
00:02:13.196 net/sfc: not in enabled drivers build config
00:02:13.196 net/softnic: not in enabled drivers build config
00:02:13.196 net/tap: not in enabled drivers build config
00:02:13.196 net/thunderx: not in enabled drivers build config
00:02:13.196 net/txgbe: not in enabled drivers build config
00:02:13.196 net/vdev_netvsc: not in enabled drivers build config
00:02:13.196 net/vhost: not in enabled drivers build config
00:02:13.196 net/virtio: not in enabled drivers build config
00:02:13.196 net/vmxnet3: not in enabled drivers build config
00:02:13.196 raw/*: missing internal dependency, "rawdev"
00:02:13.196 crypto/armv8: not in enabled drivers build config
00:02:13.196 crypto/bcmfs: not in enabled drivers build config
00:02:13.196 crypto/caam_jr: not in enabled drivers build config
00:02:13.196 crypto/ccp: not in enabled drivers build config
00:02:13.196 crypto/cnxk: not in enabled drivers build config
00:02:13.196 crypto/dpaa_sec: not in enabled drivers build config
00:02:13.196 crypto/dpaa2_sec: not in enabled drivers build config
00:02:13.196 crypto/ipsec_mb: not in enabled drivers build config
00:02:13.196 crypto/mlx5: not in enabled drivers build config
00:02:13.196 crypto/mvsam: not in enabled drivers build config
00:02:13.196 crypto/nitrox: not in enabled drivers build config
00:02:13.196 crypto/null: not in enabled drivers build config
00:02:13.196 crypto/octeontx: not in enabled drivers build config
00:02:13.196 crypto/openssl: not in enabled drivers build config
00:02:13.196 crypto/scheduler: not in enabled drivers build config
00:02:13.196 crypto/uadk: not in enabled drivers build config
00:02:13.196 crypto/virtio: not in enabled drivers build config
00:02:13.196 compress/isal: not in enabled drivers build config
00:02:13.196 compress/mlx5: not in enabled drivers build config
00:02:13.196 compress/nitrox: not in enabled drivers build config
00:02:13.196 compress/octeontx: not in enabled drivers build config
00:02:13.196 compress/zlib: not in enabled drivers build config
00:02:13.196 regex/*: missing internal dependency, "regexdev"
00:02:13.196 ml/*: missing internal dependency, "mldev"
00:02:13.196 vdpa/ifc: not in enabled drivers build config
00:02:13.196 vdpa/mlx5: not in enabled drivers build config
00:02:13.196 vdpa/nfp: not in enabled drivers build config
00:02:13.196 vdpa/sfc: not in enabled drivers build config
00:02:13.196 event/*: missing internal dependency, "eventdev"
00:02:13.196 baseband/*: missing internal dependency, "bbdev"
00:02:13.196 gpu/*: missing internal dependency, "gpudev"
00:02:13.196
00:02:13.196
00:02:13.196 Build targets in project: 84
00:02:13.196
00:02:13.196 DPDK 24.03.0
00:02:13.196
00:02:13.196 User defined options
00:02:13.196 buildtype : debug
00:02:13.196 default_library : shared
00:02:13.196 libdir : lib
00:02:13.196 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:13.196 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:13.196 c_link_args :
00:02:13.196 cpu_instruction_set: native
00:02:13.196 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:02:13.196 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:02:13.196 enable_docs : false
00:02:13.196 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:13.196 enable_kmods : false
00:02:13.196 max_lcores : 128
00:02:13.196 tests : false
00:02:13.196
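The "User defined options" summary above is how SPDK's configure drives the DPDK submodule build. Reproducing a configuration like it by hand means passing the same options to meson setup; a hedged sketch with the disable lists abbreviated for readability (option names are DPDK's, the shortened lists are illustrative only):

  # Illustrative: the full disable_apps/disable_libs lists from the log
  # are much longer than shown here.
  meson setup dpdk/build-tmp dpdk \
    --buildtype=debug -Ddefault_library=shared \
    -Ddisable_apps=dumpcap,pdump,graph \
    -Ddisable_libs=bbdev,gro,gso \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Denable_docs=false -Denable_kmods=false \
    -Dmax_lcores=128 -Dtests=false
  ninja -C dpdk/build-tmp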
00:02:13.196 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:13.196 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:02:13.196 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:13.196 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:13.196 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:13.196 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:13.196 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:13.196 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:13.196 [7/267] Linking static target lib/librte_kvargs.a
00:02:13.196 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:13.196 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:13.196 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:13.196 [11/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:13.196 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:13.196 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:13.196 [14/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:13.196 [15/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:13.196 [16/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:13.196 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:13.196 [18/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:13.196 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:13.196 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:13.196 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:13.196 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:13.196 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:13.196 [24/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:13.196 [25/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:13.196 [26/267] Linking static target lib/librte_log.a
00:02:13.196 [27/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:13.196 [28/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:13.196 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:13.196 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:13.196 [31/267] Linking static target lib/librte_pci.a
00:02:13.196 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:13.196 [33/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:13.196 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:13.455 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:13.455 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:13.455 [37/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:13.455 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:13.455 [39/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.455 [40/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.455 [41/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:13.455 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:13.455 [43/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:13.455 [44/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:13.716 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:13.716 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:13.716 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:13.716 [48/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:13.716 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:13.716 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:13.716 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:13.716 [52/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:13.716 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:13.716 [54/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:13.716 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:13.716 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:13.716 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:13.716 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:13.716 [59/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:13.716 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:13.716 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:13.716 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:13.716 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:13.716 [64/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:13.716 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:13.716 [66/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:13.716 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:13.716 [68/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:13.716 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:13.716 [71/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:13.716 [72/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:13.716 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:13.716 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:13.716 [75/267] Linking static target lib/librte_dmadev.a 00:02:13.716 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:13.716 [77/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:13.716 [78/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:13.716 [79/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:13.716 [80/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:13.716 [81/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:13.716 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:13.716 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:13.716 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:13.716 [85/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:13.716 [86/267] Linking static target lib/librte_meter.a 00:02:13.716 [87/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:13.716 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:13.716 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:13.716 [90/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:13.716 [91/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:13.716 [92/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:13.717 [93/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:13.717 [94/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:13.717 [95/267] Linking static target lib/librte_ring.a 00:02:13.717 [96/267] Linking static target lib/librte_telemetry.a 00:02:13.717 [97/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:13.717 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:13.717 [99/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:13.717 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:13.717 [101/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:13.717 [102/267] Linking static target lib/librte_timer.a 00:02:13.717 [103/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:13.717 [104/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:13.717 [105/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:13.717 [106/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:13.717 [107/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:13.717 [108/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:13.717 [109/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:13.717 [110/267] Linking static target lib/librte_cmdline.a 00:02:13.717 [111/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:13.717 
[112/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:13.717 [113/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:13.717 [114/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:13.717 [115/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:13.717 [116/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:13.717 [117/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:13.717 [118/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:13.717 [119/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:13.717 [120/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:13.717 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:13.717 [122/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:13.717 [123/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:13.717 [124/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:13.717 [125/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:13.717 [126/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:13.717 [127/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:13.717 [128/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:13.717 [129/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:13.717 [130/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:13.717 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:13.717 [132/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:13.717 [133/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:13.717 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:13.717 [135/267] Linking static target lib/librte_power.a 00:02:13.717 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:13.717 [137/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:13.717 [138/267] Linking static target lib/librte_compressdev.a 00:02:13.717 [139/267] Linking static target lib/librte_mempool.a 00:02:13.717 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:13.717 [141/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:13.717 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:13.717 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:13.717 [144/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:13.717 [145/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:13.717 [146/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:13.717 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:13.717 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:13.717 [149/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:13.717 [150/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.717 [151/267] Linking static target lib/librte_rcu.a 00:02:13.717 [152/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:13.717 [153/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:13.717 [154/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:13.717 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:13.717 [156/267] Linking static target lib/librte_net.a 00:02:13.717 [157/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:13.717 [158/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:13.717 [159/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:13.717 [160/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:13.717 [161/267] Linking static target lib/librte_reorder.a 00:02:13.717 [162/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:13.717 [163/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:13.717 [164/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:13.717 [165/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:13.717 [166/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:13.717 [167/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:13.977 [168/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:13.977 [169/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:13.977 [170/267] Linking target lib/librte_log.so.24.1 00:02:13.977 [171/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:13.977 [172/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:13.977 [173/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:13.977 [174/267] Linking static target lib/librte_eal.a 00:02:13.977 [175/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:13.977 [176/267] Linking static target lib/librte_security.a 00:02:13.977 [177/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:13.977 [178/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:13.977 [179/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:13.977 [180/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:13.977 [181/267] Linking static target drivers/librte_bus_vdev.a 00:02:13.977 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:13.977 [183/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.977 [184/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:13.977 [185/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:13.977 [186/267] Linking static target lib/librte_mbuf.a 00:02:13.977 [187/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:13.977 [188/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:13.977 [189/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.977 [190/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:13.977 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:13.977 [192/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:13.977 [193/267] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:13.977 [194/267] Linking static target lib/librte_hash.a 00:02:13.977 [195/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:13.977 [196/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:13.977 [197/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:13.977 [198/267] Linking target lib/librte_kvargs.so.24.1 00:02:13.977 [199/267] Linking static target drivers/librte_bus_pci.a 00:02:13.977 [200/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:13.978 [201/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:13.978 [202/267] Linking static target drivers/librte_mempool_ring.a 00:02:14.238 [203/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.238 [204/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.238 [205/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.238 [206/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:14.238 [207/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:14.238 [208/267] Linking static target lib/librte_cryptodev.a 00:02:14.238 [209/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.238 [210/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.238 [211/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.238 [212/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:14.238 [213/267] Linking target lib/librte_telemetry.so.24.1 00:02:14.238 [214/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.498 [215/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:14.498 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.498 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.757 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:14.757 [219/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.757 [220/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:14.757 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.757 [222/267] Linking static target lib/librte_ethdev.a 00:02:14.757 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.020 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.020 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.020 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.652 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:15.652 [228/267] Linking static target lib/librte_vhost.a 00:02:16.633 [229/267] Generating lib/cryptodev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:18.014 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.594 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.534 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.534 [233/267] Linking target lib/librte_eal.so.24.1 00:02:25.534 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:25.534 [235/267] Linking target lib/librte_ring.so.24.1 00:02:25.534 [236/267] Linking target lib/librte_meter.so.24.1 00:02:25.534 [237/267] Linking target lib/librte_pci.so.24.1 00:02:25.795 [238/267] Linking target lib/librte_timer.so.24.1 00:02:25.795 [239/267] Linking target lib/librte_dmadev.so.24.1 00:02:25.795 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:25.795 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:25.795 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:25.795 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:25.795 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:25.795 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:25.795 [246/267] Linking target lib/librte_rcu.so.24.1 00:02:25.795 [247/267] Linking target lib/librte_mempool.so.24.1 00:02:25.795 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:26.054 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:26.054 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:26.054 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:26.054 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:26.054 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:26.315 [254/267] Linking target lib/librte_net.so.24.1 00:02:26.315 [255/267] Linking target lib/librte_reorder.so.24.1 00:02:26.315 [256/267] Linking target lib/librte_compressdev.so.24.1 00:02:26.315 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:26.315 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:26.315 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:26.315 [260/267] Linking target lib/librte_cmdline.so.24.1 00:02:26.315 [261/267] Linking target lib/librte_hash.so.24.1 00:02:26.315 [262/267] Linking target lib/librte_security.so.24.1 00:02:26.315 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:26.574 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:26.574 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:26.574 [266/267] Linking target lib/librte_power.so.24.1 00:02:26.574 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:26.574 INFO: autodetecting backend as ninja 00:02:26.574 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:30.779 CC lib/log/log.o 00:02:30.779 CC lib/ut/ut.o 00:02:30.779 CC lib/log/log_flags.o 00:02:30.779 CC lib/ut_mock/mock.o 00:02:30.779 CC lib/log/log_deprecated.o 00:02:30.779 LIB libspdk_ut.a 00:02:30.779 LIB libspdk_log.a 00:02:30.779 
LIB libspdk_ut_mock.a 00:02:30.779 SO libspdk_ut.so.2.0 00:02:30.779 SO libspdk_log.so.7.1 00:02:30.779 SO libspdk_ut_mock.so.6.0 00:02:30.779 SYMLINK libspdk_ut.so 00:02:30.779 SYMLINK libspdk_ut_mock.so 00:02:30.779 SYMLINK libspdk_log.so 00:02:30.779 CC lib/util/base64.o 00:02:30.779 CC lib/dma/dma.o 00:02:30.779 CC lib/util/bit_array.o 00:02:30.779 CC lib/util/cpuset.o 00:02:30.779 CC lib/util/crc16.o 00:02:30.779 CC lib/util/crc32.o 00:02:30.779 CC lib/util/crc32c.o 00:02:30.779 CC lib/util/crc32_ieee.o 00:02:30.779 CC lib/ioat/ioat.o 00:02:30.779 CC lib/util/crc64.o 00:02:30.779 CC lib/util/dif.o 00:02:30.779 CXX lib/trace_parser/trace.o 00:02:30.779 CC lib/util/fd.o 00:02:30.779 CC lib/util/fd_group.o 00:02:30.779 CC lib/util/file.o 00:02:30.779 CC lib/util/hexlify.o 00:02:30.779 CC lib/util/iov.o 00:02:30.779 CC lib/util/math.o 00:02:30.779 CC lib/util/net.o 00:02:30.779 CC lib/util/pipe.o 00:02:30.779 CC lib/util/strerror_tls.o 00:02:30.779 CC lib/util/string.o 00:02:30.779 CC lib/util/uuid.o 00:02:30.779 CC lib/util/xor.o 00:02:30.779 CC lib/util/zipf.o 00:02:30.779 CC lib/util/md5.o 00:02:31.041 CC lib/vfio_user/host/vfio_user_pci.o 00:02:31.041 CC lib/vfio_user/host/vfio_user.o 00:02:31.041 LIB libspdk_dma.a 00:02:31.041 SO libspdk_dma.so.5.0 00:02:31.041 LIB libspdk_ioat.a 00:02:31.041 SYMLINK libspdk_dma.so 00:02:31.041 SO libspdk_ioat.so.7.0 00:02:31.041 SYMLINK libspdk_ioat.so 00:02:31.302 LIB libspdk_vfio_user.a 00:02:31.302 SO libspdk_vfio_user.so.5.0 00:02:31.302 SYMLINK libspdk_vfio_user.so 00:02:31.302 LIB libspdk_util.a 00:02:31.302 SO libspdk_util.so.10.1 00:02:31.563 SYMLINK libspdk_util.so 00:02:31.563 LIB libspdk_trace_parser.a 00:02:31.563 SO libspdk_trace_parser.so.6.0 00:02:31.824 SYMLINK libspdk_trace_parser.so 00:02:31.824 CC lib/conf/conf.o 00:02:31.824 CC lib/env_dpdk/env.o 00:02:31.824 CC lib/idxd/idxd.o 00:02:31.824 CC lib/env_dpdk/memory.o 00:02:31.824 CC lib/idxd/idxd_user.o 00:02:31.824 CC lib/vmd/vmd.o 00:02:31.824 CC lib/json/json_parse.o 00:02:31.824 CC lib/env_dpdk/pci.o 00:02:31.824 CC lib/idxd/idxd_kernel.o 00:02:31.824 CC lib/vmd/led.o 00:02:31.824 CC lib/rdma_utils/rdma_utils.o 00:02:31.824 CC lib/json/json_util.o 00:02:31.824 CC lib/env_dpdk/init.o 00:02:31.824 CC lib/env_dpdk/threads.o 00:02:31.824 CC lib/env_dpdk/pci_ioat.o 00:02:31.824 CC lib/json/json_write.o 00:02:31.824 CC lib/env_dpdk/pci_virtio.o 00:02:31.824 CC lib/env_dpdk/pci_vmd.o 00:02:31.824 CC lib/env_dpdk/pci_idxd.o 00:02:31.824 CC lib/env_dpdk/pci_event.o 00:02:31.824 CC lib/env_dpdk/sigbus_handler.o 00:02:31.824 CC lib/env_dpdk/pci_dpdk.o 00:02:31.824 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:31.824 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:32.085 LIB libspdk_conf.a 00:02:32.085 SO libspdk_conf.so.6.0 00:02:32.085 LIB libspdk_rdma_utils.a 00:02:32.346 LIB libspdk_json.a 00:02:32.346 SO libspdk_rdma_utils.so.1.0 00:02:32.346 SYMLINK libspdk_conf.so 00:02:32.346 SO libspdk_json.so.6.0 00:02:32.346 SYMLINK libspdk_rdma_utils.so 00:02:32.346 SYMLINK libspdk_json.so 00:02:32.346 LIB libspdk_idxd.a 00:02:32.607 SO libspdk_idxd.so.12.1 00:02:32.607 LIB libspdk_vmd.a 00:02:32.607 SO libspdk_vmd.so.6.0 00:02:32.607 SYMLINK libspdk_idxd.so 00:02:32.607 SYMLINK libspdk_vmd.so 00:02:32.607 CC lib/rdma_provider/common.o 00:02:32.607 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:32.607 CC lib/jsonrpc/jsonrpc_server.o 00:02:32.607 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:32.607 CC lib/jsonrpc/jsonrpc_client.o 00:02:32.607 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:32.868 LIB 
libspdk_rdma_provider.a 00:02:32.868 SO libspdk_rdma_provider.so.7.0 00:02:32.868 LIB libspdk_jsonrpc.a 00:02:33.128 SO libspdk_jsonrpc.so.6.0 00:02:33.128 SYMLINK libspdk_rdma_provider.so 00:02:33.128 SYMLINK libspdk_jsonrpc.so 00:02:33.128 LIB libspdk_env_dpdk.a 00:02:33.128 SO libspdk_env_dpdk.so.15.1 00:02:33.388 SYMLINK libspdk_env_dpdk.so 00:02:33.388 CC lib/rpc/rpc.o 00:02:33.648 LIB libspdk_rpc.a 00:02:33.648 SO libspdk_rpc.so.6.0 00:02:33.648 SYMLINK libspdk_rpc.so 00:02:34.219 CC lib/keyring/keyring.o 00:02:34.219 CC lib/keyring/keyring_rpc.o 00:02:34.219 CC lib/trace/trace.o 00:02:34.219 CC lib/trace/trace_flags.o 00:02:34.219 CC lib/trace/trace_rpc.o 00:02:34.219 CC lib/notify/notify.o 00:02:34.219 CC lib/notify/notify_rpc.o 00:02:34.219 LIB libspdk_notify.a 00:02:34.479 SO libspdk_notify.so.6.0 00:02:34.479 LIB libspdk_keyring.a 00:02:34.479 LIB libspdk_trace.a 00:02:34.479 SO libspdk_keyring.so.2.0 00:02:34.479 SO libspdk_trace.so.11.0 00:02:34.479 SYMLINK libspdk_notify.so 00:02:34.479 SYMLINK libspdk_keyring.so 00:02:34.479 SYMLINK libspdk_trace.so 00:02:34.740 CC lib/thread/thread.o 00:02:34.740 CC lib/thread/iobuf.o 00:02:34.740 CC lib/sock/sock.o 00:02:34.740 CC lib/sock/sock_rpc.o 00:02:35.311 LIB libspdk_sock.a 00:02:35.311 SO libspdk_sock.so.10.0 00:02:35.311 SYMLINK libspdk_sock.so 00:02:35.882 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:35.882 CC lib/nvme/nvme_ctrlr.o 00:02:35.882 CC lib/nvme/nvme_fabric.o 00:02:35.882 CC lib/nvme/nvme_ns_cmd.o 00:02:35.882 CC lib/nvme/nvme_ns.o 00:02:35.882 CC lib/nvme/nvme_pcie_common.o 00:02:35.882 CC lib/nvme/nvme_pcie.o 00:02:35.882 CC lib/nvme/nvme_qpair.o 00:02:35.882 CC lib/nvme/nvme.o 00:02:35.882 CC lib/nvme/nvme_quirks.o 00:02:35.882 CC lib/nvme/nvme_transport.o 00:02:35.882 CC lib/nvme/nvme_discovery.o 00:02:35.882 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:35.882 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:35.882 CC lib/nvme/nvme_tcp.o 00:02:35.882 CC lib/nvme/nvme_opal.o 00:02:35.882 CC lib/nvme/nvme_io_msg.o 00:02:35.882 CC lib/nvme/nvme_poll_group.o 00:02:35.882 CC lib/nvme/nvme_zns.o 00:02:35.882 CC lib/nvme/nvme_stubs.o 00:02:35.882 CC lib/nvme/nvme_auth.o 00:02:35.882 CC lib/nvme/nvme_cuse.o 00:02:35.882 CC lib/nvme/nvme_vfio_user.o 00:02:35.882 CC lib/nvme/nvme_rdma.o 00:02:36.143 LIB libspdk_thread.a 00:02:36.143 SO libspdk_thread.so.11.0 00:02:36.404 SYMLINK libspdk_thread.so 00:02:36.664 CC lib/blob/blobstore.o 00:02:36.664 CC lib/blob/request.o 00:02:36.664 CC lib/blob/blob_bs_dev.o 00:02:36.664 CC lib/blob/zeroes.o 00:02:36.664 CC lib/vfu_tgt/tgt_endpoint.o 00:02:36.664 CC lib/vfu_tgt/tgt_rpc.o 00:02:36.664 CC lib/accel/accel.o 00:02:36.665 CC lib/accel/accel_rpc.o 00:02:36.665 CC lib/accel/accel_sw.o 00:02:36.665 CC lib/fsdev/fsdev.o 00:02:36.665 CC lib/fsdev/fsdev_io.o 00:02:36.665 CC lib/virtio/virtio.o 00:02:36.665 CC lib/fsdev/fsdev_rpc.o 00:02:36.665 CC lib/virtio/virtio_vhost_user.o 00:02:36.665 CC lib/virtio/virtio_vfio_user.o 00:02:36.665 CC lib/virtio/virtio_pci.o 00:02:36.665 CC lib/init/json_config.o 00:02:36.665 CC lib/init/subsystem.o 00:02:36.665 CC lib/init/subsystem_rpc.o 00:02:36.665 CC lib/init/rpc.o 00:02:36.924 LIB libspdk_init.a 00:02:36.924 SO libspdk_init.so.6.0 00:02:36.924 LIB libspdk_vfu_tgt.a 00:02:36.924 LIB libspdk_virtio.a 00:02:36.924 SO libspdk_vfu_tgt.so.3.0 00:02:36.924 SO libspdk_virtio.so.7.0 00:02:37.185 SYMLINK libspdk_init.so 00:02:37.185 SYMLINK libspdk_vfu_tgt.so 00:02:37.185 SYMLINK libspdk_virtio.so 00:02:37.185 LIB libspdk_fsdev.a 00:02:37.185 SO libspdk_fsdev.so.2.0 00:02:37.444 
SYMLINK libspdk_fsdev.so 00:02:37.444 CC lib/event/app.o 00:02:37.444 CC lib/event/reactor.o 00:02:37.444 CC lib/event/log_rpc.o 00:02:37.444 CC lib/event/app_rpc.o 00:02:37.444 CC lib/event/scheduler_static.o 00:02:37.704 LIB libspdk_accel.a 00:02:37.704 LIB libspdk_nvme.a 00:02:37.704 SO libspdk_accel.so.16.0 00:02:37.704 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:37.704 SYMLINK libspdk_accel.so 00:02:37.704 SO libspdk_nvme.so.15.0 00:02:37.704 LIB libspdk_event.a 00:02:37.964 SO libspdk_event.so.14.0 00:02:37.964 SYMLINK libspdk_event.so 00:02:37.964 SYMLINK libspdk_nvme.so 00:02:38.225 CC lib/bdev/bdev.o 00:02:38.225 CC lib/bdev/bdev_rpc.o 00:02:38.225 CC lib/bdev/bdev_zone.o 00:02:38.225 CC lib/bdev/part.o 00:02:38.225 CC lib/bdev/scsi_nvme.o 00:02:38.225 LIB libspdk_fuse_dispatcher.a 00:02:38.485 SO libspdk_fuse_dispatcher.so.1.0 00:02:38.485 SYMLINK libspdk_fuse_dispatcher.so 00:02:39.425 LIB libspdk_blob.a 00:02:39.425 SO libspdk_blob.so.11.0 00:02:39.425 SYMLINK libspdk_blob.so 00:02:39.685 CC lib/blobfs/blobfs.o 00:02:39.685 CC lib/lvol/lvol.o 00:02:39.685 CC lib/blobfs/tree.o 00:02:40.626 LIB libspdk_bdev.a 00:02:40.626 LIB libspdk_blobfs.a 00:02:40.626 SO libspdk_bdev.so.17.0 00:02:40.626 SO libspdk_blobfs.so.10.0 00:02:40.626 SYMLINK libspdk_blobfs.so 00:02:40.626 LIB libspdk_lvol.a 00:02:40.626 SYMLINK libspdk_bdev.so 00:02:40.626 SO libspdk_lvol.so.10.0 00:02:40.626 SYMLINK libspdk_lvol.so 00:02:40.887 CC lib/scsi/dev.o 00:02:40.887 CC lib/nvmf/ctrlr.o 00:02:40.887 CC lib/scsi/lun.o 00:02:40.887 CC lib/nvmf/ctrlr_discovery.o 00:02:40.887 CC lib/nvmf/ctrlr_bdev.o 00:02:40.887 CC lib/scsi/port.o 00:02:40.887 CC lib/nvmf/subsystem.o 00:02:40.887 CC lib/scsi/scsi.o 00:02:40.887 CC lib/scsi/scsi_bdev.o 00:02:40.887 CC lib/nvmf/nvmf.o 00:02:40.887 CC lib/scsi/scsi_pr.o 00:02:40.887 CC lib/nvmf/nvmf_rpc.o 00:02:40.887 CC lib/ublk/ublk.o 00:02:40.887 CC lib/nvmf/transport.o 00:02:40.887 CC lib/scsi/scsi_rpc.o 00:02:40.887 CC lib/ublk/ublk_rpc.o 00:02:40.887 CC lib/nvmf/tcp.o 00:02:40.887 CC lib/scsi/task.o 00:02:40.887 CC lib/nbd/nbd.o 00:02:40.887 CC lib/nvmf/stubs.o 00:02:40.887 CC lib/nbd/nbd_rpc.o 00:02:40.887 CC lib/ftl/ftl_core.o 00:02:40.887 CC lib/nvmf/mdns_server.o 00:02:40.887 CC lib/ftl/ftl_init.o 00:02:40.887 CC lib/nvmf/vfio_user.o 00:02:40.887 CC lib/nvmf/rdma.o 00:02:40.887 CC lib/ftl/ftl_layout.o 00:02:40.887 CC lib/ftl/ftl_debug.o 00:02:40.887 CC lib/nvmf/auth.o 00:02:40.887 CC lib/ftl/ftl_io.o 00:02:40.887 CC lib/ftl/ftl_sb.o 00:02:40.887 CC lib/ftl/ftl_l2p.o 00:02:40.887 CC lib/ftl/ftl_l2p_flat.o 00:02:40.887 CC lib/ftl/ftl_nv_cache.o 00:02:41.148 CC lib/ftl/ftl_band.o 00:02:41.148 CC lib/ftl/ftl_band_ops.o 00:02:41.148 CC lib/ftl/ftl_writer.o 00:02:41.148 CC lib/ftl/ftl_rq.o 00:02:41.148 CC lib/ftl/ftl_reloc.o 00:02:41.148 CC lib/ftl/ftl_l2p_cache.o 00:02:41.148 CC lib/ftl/ftl_p2l.o 00:02:41.148 CC lib/ftl/ftl_p2l_log.o 00:02:41.148 CC lib/ftl/mngt/ftl_mngt.o 00:02:41.148 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:41.148 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:41.149 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:41.149 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:41.149 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:41.149 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:41.149 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:41.149 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:41.149 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:41.149 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:41.149 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:41.149 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:41.149 CC lib/ftl/utils/ftl_conf.o 00:02:41.149 CC 
lib/ftl/utils/ftl_md.o 00:02:41.149 CC lib/ftl/utils/ftl_mempool.o 00:02:41.149 CC lib/ftl/utils/ftl_bitmap.o 00:02:41.149 CC lib/ftl/utils/ftl_property.o 00:02:41.149 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:41.149 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:41.149 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:41.149 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:41.149 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:41.149 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:41.149 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:41.149 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:41.149 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:41.149 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:41.149 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:41.149 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:41.149 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:41.149 CC lib/ftl/base/ftl_base_dev.o 00:02:41.149 CC lib/ftl/base/ftl_base_bdev.o 00:02:41.149 CC lib/ftl/ftl_trace.o 00:02:41.718 LIB libspdk_nbd.a 00:02:41.718 SO libspdk_nbd.so.7.0 00:02:41.718 LIB libspdk_scsi.a 00:02:41.718 SYMLINK libspdk_nbd.so 00:02:41.718 SO libspdk_scsi.so.9.0 00:02:41.978 SYMLINK libspdk_scsi.so 00:02:41.978 LIB libspdk_ublk.a 00:02:41.978 SO libspdk_ublk.so.3.0 00:02:41.978 SYMLINK libspdk_ublk.so 00:02:42.239 LIB libspdk_ftl.a 00:02:42.239 CC lib/vhost/vhost.o 00:02:42.239 CC lib/vhost/vhost_rpc.o 00:02:42.239 CC lib/vhost/vhost_scsi.o 00:02:42.239 CC lib/vhost/vhost_blk.o 00:02:42.239 CC lib/iscsi/conn.o 00:02:42.239 CC lib/vhost/rte_vhost_user.o 00:02:42.239 CC lib/iscsi/init_grp.o 00:02:42.239 CC lib/iscsi/iscsi.o 00:02:42.239 CC lib/iscsi/param.o 00:02:42.239 CC lib/iscsi/portal_grp.o 00:02:42.239 CC lib/iscsi/tgt_node.o 00:02:42.239 CC lib/iscsi/iscsi_subsystem.o 00:02:42.239 CC lib/iscsi/iscsi_rpc.o 00:02:42.239 CC lib/iscsi/task.o 00:02:42.500 SO libspdk_ftl.so.9.0 00:02:42.760 SYMLINK libspdk_ftl.so 00:02:43.022 LIB libspdk_nvmf.a 00:02:43.284 SO libspdk_nvmf.so.20.0 00:02:43.284 LIB libspdk_vhost.a 00:02:43.284 SO libspdk_vhost.so.8.0 00:02:43.284 SYMLINK libspdk_nvmf.so 00:02:43.545 SYMLINK libspdk_vhost.so 00:02:43.545 LIB libspdk_iscsi.a 00:02:43.545 SO libspdk_iscsi.so.8.0 00:02:43.806 SYMLINK libspdk_iscsi.so 00:02:44.378 CC module/env_dpdk/env_dpdk_rpc.o 00:02:44.378 CC module/vfu_device/vfu_virtio.o 00:02:44.378 CC module/vfu_device/vfu_virtio_blk.o 00:02:44.378 CC module/vfu_device/vfu_virtio_scsi.o 00:02:44.378 CC module/vfu_device/vfu_virtio_rpc.o 00:02:44.378 CC module/vfu_device/vfu_virtio_fs.o 00:02:44.378 LIB libspdk_env_dpdk_rpc.a 00:02:44.378 CC module/blob/bdev/blob_bdev.o 00:02:44.378 CC module/sock/posix/posix.o 00:02:44.378 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:44.378 CC module/accel/error/accel_error.o 00:02:44.378 CC module/accel/error/accel_error_rpc.o 00:02:44.378 CC module/accel/iaa/accel_iaa.o 00:02:44.378 CC module/accel/iaa/accel_iaa_rpc.o 00:02:44.378 CC module/accel/ioat/accel_ioat.o 00:02:44.378 CC module/keyring/file/keyring.o 00:02:44.378 CC module/keyring/file/keyring_rpc.o 00:02:44.378 CC module/accel/ioat/accel_ioat_rpc.o 00:02:44.378 CC module/fsdev/aio/fsdev_aio.o 00:02:44.378 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:44.378 CC module/fsdev/aio/linux_aio_mgr.o 00:02:44.378 CC module/accel/dsa/accel_dsa.o 00:02:44.378 CC module/accel/dsa/accel_dsa_rpc.o 00:02:44.378 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:44.378 CC module/keyring/linux/keyring.o 00:02:44.378 CC module/keyring/linux/keyring_rpc.o 00:02:44.378 CC module/scheduler/gscheduler/gscheduler.o 00:02:44.378 SO libspdk_env_dpdk_rpc.so.6.0 
00:02:44.639 SYMLINK libspdk_env_dpdk_rpc.so 00:02:44.639 LIB libspdk_scheduler_gscheduler.a 00:02:44.639 LIB libspdk_keyring_file.a 00:02:44.639 LIB libspdk_keyring_linux.a 00:02:44.639 SO libspdk_scheduler_gscheduler.so.4.0 00:02:44.639 LIB libspdk_scheduler_dpdk_governor.a 00:02:44.639 SO libspdk_keyring_file.so.2.0 00:02:44.639 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:44.639 SO libspdk_keyring_linux.so.1.0 00:02:44.639 LIB libspdk_scheduler_dynamic.a 00:02:44.639 LIB libspdk_accel_error.a 00:02:44.639 LIB libspdk_accel_iaa.a 00:02:44.639 LIB libspdk_accel_ioat.a 00:02:44.639 SO libspdk_accel_error.so.2.0 00:02:44.639 LIB libspdk_blob_bdev.a 00:02:44.639 SYMLINK libspdk_scheduler_gscheduler.so 00:02:44.900 SO libspdk_scheduler_dynamic.so.4.0 00:02:44.900 SO libspdk_accel_iaa.so.3.0 00:02:44.900 SYMLINK libspdk_keyring_file.so 00:02:44.900 SYMLINK libspdk_keyring_linux.so 00:02:44.900 SO libspdk_accel_ioat.so.6.0 00:02:44.900 SO libspdk_blob_bdev.so.11.0 00:02:44.900 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:44.900 LIB libspdk_accel_dsa.a 00:02:44.900 SYMLINK libspdk_accel_error.so 00:02:44.900 SYMLINK libspdk_scheduler_dynamic.so 00:02:44.900 SO libspdk_accel_dsa.so.5.0 00:02:44.900 SYMLINK libspdk_accel_iaa.so 00:02:44.900 SYMLINK libspdk_blob_bdev.so 00:02:44.900 SYMLINK libspdk_accel_ioat.so 00:02:44.900 SYMLINK libspdk_accel_dsa.so 00:02:44.900 LIB libspdk_vfu_device.a 00:02:44.900 SO libspdk_vfu_device.so.3.0 00:02:45.159 SYMLINK libspdk_vfu_device.so 00:02:45.160 LIB libspdk_fsdev_aio.a 00:02:45.160 SO libspdk_fsdev_aio.so.1.0 00:02:45.160 LIB libspdk_sock_posix.a 00:02:45.160 SO libspdk_sock_posix.so.6.0 00:02:45.160 SYMLINK libspdk_fsdev_aio.so 00:02:45.421 SYMLINK libspdk_sock_posix.so 00:02:45.421 CC module/bdev/lvol/vbdev_lvol.o 00:02:45.421 CC module/bdev/error/vbdev_error.o 00:02:45.421 CC module/bdev/error/vbdev_error_rpc.o 00:02:45.421 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:45.421 CC module/bdev/null/bdev_null.o 00:02:45.421 CC module/bdev/gpt/gpt.o 00:02:45.421 CC module/bdev/null/bdev_null_rpc.o 00:02:45.421 CC module/bdev/gpt/vbdev_gpt.o 00:02:45.421 CC module/bdev/delay/vbdev_delay.o 00:02:45.421 CC module/bdev/passthru/vbdev_passthru.o 00:02:45.421 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:45.421 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:45.421 CC module/blobfs/bdev/blobfs_bdev.o 00:02:45.421 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:45.421 CC module/bdev/ftl/bdev_ftl.o 00:02:45.421 CC module/bdev/split/vbdev_split.o 00:02:45.421 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:45.421 CC module/bdev/raid/bdev_raid.o 00:02:45.421 CC module/bdev/split/vbdev_split_rpc.o 00:02:45.421 CC module/bdev/raid/bdev_raid_rpc.o 00:02:45.421 CC module/bdev/malloc/bdev_malloc.o 00:02:45.421 CC module/bdev/raid/bdev_raid_sb.o 00:02:45.421 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:45.421 CC module/bdev/nvme/bdev_nvme.o 00:02:45.421 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:45.421 CC module/bdev/raid/raid0.o 00:02:45.421 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:45.421 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:45.421 CC module/bdev/raid/raid1.o 00:02:45.421 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:45.421 CC module/bdev/raid/concat.o 00:02:45.421 CC module/bdev/iscsi/bdev_iscsi.o 00:02:45.421 CC module/bdev/nvme/nvme_rpc.o 00:02:45.421 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:45.421 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:45.421 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:45.421 CC module/bdev/aio/bdev_aio.o 
00:02:45.421 CC module/bdev/nvme/bdev_mdns_client.o 00:02:45.421 CC module/bdev/nvme/vbdev_opal.o 00:02:45.421 CC module/bdev/aio/bdev_aio_rpc.o 00:02:45.421 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:45.421 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:45.682 LIB libspdk_blobfs_bdev.a 00:02:45.682 LIB libspdk_bdev_split.a 00:02:45.682 SO libspdk_blobfs_bdev.so.6.0 00:02:45.682 LIB libspdk_bdev_null.a 00:02:45.682 LIB libspdk_bdev_error.a 00:02:45.682 LIB libspdk_bdev_gpt.a 00:02:45.682 SO libspdk_bdev_split.so.6.0 00:02:45.943 LIB libspdk_bdev_ftl.a 00:02:45.943 SO libspdk_bdev_null.so.6.0 00:02:45.943 SO libspdk_bdev_error.so.6.0 00:02:45.943 LIB libspdk_bdev_passthru.a 00:02:45.943 SO libspdk_bdev_gpt.so.6.0 00:02:45.943 SYMLINK libspdk_blobfs_bdev.so 00:02:45.943 SO libspdk_bdev_ftl.so.6.0 00:02:45.943 SYMLINK libspdk_bdev_split.so 00:02:45.943 SO libspdk_bdev_passthru.so.6.0 00:02:45.943 LIB libspdk_bdev_aio.a 00:02:45.943 LIB libspdk_bdev_delay.a 00:02:45.943 SYMLINK libspdk_bdev_null.so 00:02:45.943 LIB libspdk_bdev_zone_block.a 00:02:45.943 SYMLINK libspdk_bdev_error.so 00:02:45.943 LIB libspdk_bdev_iscsi.a 00:02:45.943 SYMLINK libspdk_bdev_gpt.so 00:02:45.943 LIB libspdk_bdev_malloc.a 00:02:45.943 SO libspdk_bdev_zone_block.so.6.0 00:02:45.943 SO libspdk_bdev_delay.so.6.0 00:02:45.943 SO libspdk_bdev_aio.so.6.0 00:02:45.943 SYMLINK libspdk_bdev_ftl.so 00:02:45.943 SO libspdk_bdev_iscsi.so.6.0 00:02:45.943 SYMLINK libspdk_bdev_passthru.so 00:02:45.943 SO libspdk_bdev_malloc.so.6.0 00:02:45.943 SYMLINK libspdk_bdev_zone_block.so 00:02:45.943 SYMLINK libspdk_bdev_delay.so 00:02:45.943 SYMLINK libspdk_bdev_aio.so 00:02:45.943 LIB libspdk_bdev_lvol.a 00:02:45.943 SYMLINK libspdk_bdev_iscsi.so 00:02:45.943 SYMLINK libspdk_bdev_malloc.so 00:02:45.943 LIB libspdk_bdev_virtio.a 00:02:45.943 SO libspdk_bdev_lvol.so.6.0 00:02:46.204 SO libspdk_bdev_virtio.so.6.0 00:02:46.204 SYMLINK libspdk_bdev_lvol.so 00:02:46.204 SYMLINK libspdk_bdev_virtio.so 00:02:46.464 LIB libspdk_bdev_raid.a 00:02:46.464 SO libspdk_bdev_raid.so.6.0 00:02:46.725 SYMLINK libspdk_bdev_raid.so 00:02:48.108 LIB libspdk_bdev_nvme.a 00:02:48.108 SO libspdk_bdev_nvme.so.7.1 00:02:48.108 SYMLINK libspdk_bdev_nvme.so 00:02:48.678 CC module/event/subsystems/vmd/vmd.o 00:02:48.678 CC module/event/subsystems/sock/sock.o 00:02:48.678 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:48.678 CC module/event/subsystems/iobuf/iobuf.o 00:02:48.678 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:48.678 CC module/event/subsystems/scheduler/scheduler.o 00:02:48.678 CC module/event/subsystems/keyring/keyring.o 00:02:48.678 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:48.678 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:48.678 CC module/event/subsystems/fsdev/fsdev.o 00:02:48.939 LIB libspdk_event_keyring.a 00:02:48.939 LIB libspdk_event_fsdev.a 00:02:48.939 LIB libspdk_event_sock.a 00:02:48.939 LIB libspdk_event_vhost_blk.a 00:02:48.939 LIB libspdk_event_vmd.a 00:02:48.939 LIB libspdk_event_scheduler.a 00:02:48.939 SO libspdk_event_keyring.so.1.0 00:02:48.939 LIB libspdk_event_iobuf.a 00:02:48.939 LIB libspdk_event_vfu_tgt.a 00:02:48.939 SO libspdk_event_fsdev.so.1.0 00:02:48.939 SO libspdk_event_sock.so.5.0 00:02:48.939 SO libspdk_event_scheduler.so.4.0 00:02:48.939 SO libspdk_event_vhost_blk.so.3.0 00:02:48.939 SO libspdk_event_vmd.so.6.0 00:02:48.939 SO libspdk_event_iobuf.so.3.0 00:02:48.939 SO libspdk_event_vfu_tgt.so.3.0 00:02:48.939 SYMLINK libspdk_event_keyring.so 00:02:48.939 SYMLINK libspdk_event_fsdev.so 
00:02:48.939 SYMLINK libspdk_event_scheduler.so 00:02:48.939 SYMLINK libspdk_event_sock.so 00:02:48.939 SYMLINK libspdk_event_vhost_blk.so 00:02:48.939 SYMLINK libspdk_event_vmd.so 00:02:48.939 SYMLINK libspdk_event_vfu_tgt.so 00:02:48.939 SYMLINK libspdk_event_iobuf.so 00:02:49.510 CC module/event/subsystems/accel/accel.o 00:02:49.510 LIB libspdk_event_accel.a 00:02:49.510 SO libspdk_event_accel.so.6.0 00:02:49.510 SYMLINK libspdk_event_accel.so 00:02:50.080 CC module/event/subsystems/bdev/bdev.o 00:02:50.080 LIB libspdk_event_bdev.a 00:02:50.080 SO libspdk_event_bdev.so.6.0 00:02:50.405 SYMLINK libspdk_event_bdev.so 00:02:50.707 CC module/event/subsystems/scsi/scsi.o 00:02:50.707 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:50.707 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:50.707 CC module/event/subsystems/nbd/nbd.o 00:02:50.707 CC module/event/subsystems/ublk/ublk.o 00:02:50.707 LIB libspdk_event_scsi.a 00:02:50.707 LIB libspdk_event_nbd.a 00:02:50.707 LIB libspdk_event_ublk.a 00:02:50.970 SO libspdk_event_scsi.so.6.0 00:02:50.970 SO libspdk_event_nbd.so.6.0 00:02:50.970 SO libspdk_event_ublk.so.3.0 00:02:50.970 LIB libspdk_event_nvmf.a 00:02:50.970 SYMLINK libspdk_event_scsi.so 00:02:50.970 SYMLINK libspdk_event_nbd.so 00:02:50.970 SYMLINK libspdk_event_ublk.so 00:02:50.970 SO libspdk_event_nvmf.so.6.0 00:02:50.970 SYMLINK libspdk_event_nvmf.so 00:02:51.231 CC module/event/subsystems/iscsi/iscsi.o 00:02:51.231 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:51.491 LIB libspdk_event_iscsi.a 00:02:51.491 LIB libspdk_event_vhost_scsi.a 00:02:51.491 SO libspdk_event_iscsi.so.6.0 00:02:51.491 SO libspdk_event_vhost_scsi.so.3.0 00:02:51.491 SYMLINK libspdk_event_iscsi.so 00:02:51.491 SYMLINK libspdk_event_vhost_scsi.so 00:02:51.751 SO libspdk.so.6.0 00:02:51.751 SYMLINK libspdk.so 00:02:52.327 CC app/trace_record/trace_record.o 00:02:52.327 CC app/spdk_nvme_identify/identify.o 00:02:52.327 CXX app/trace/trace.o 00:02:52.327 CC test/rpc_client/rpc_client_test.o 00:02:52.327 CC app/spdk_lspci/spdk_lspci.o 00:02:52.327 CC app/spdk_top/spdk_top.o 00:02:52.327 CC app/spdk_nvme_discover/discovery_aer.o 00:02:52.327 CC app/spdk_nvme_perf/perf.o 00:02:52.327 TEST_HEADER include/spdk/accel.h 00:02:52.327 TEST_HEADER include/spdk/accel_module.h 00:02:52.327 TEST_HEADER include/spdk/assert.h 00:02:52.327 TEST_HEADER include/spdk/barrier.h 00:02:52.327 TEST_HEADER include/spdk/base64.h 00:02:52.327 TEST_HEADER include/spdk/bdev.h 00:02:52.327 TEST_HEADER include/spdk/bdev_module.h 00:02:52.327 TEST_HEADER include/spdk/bdev_zone.h 00:02:52.327 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:52.327 TEST_HEADER include/spdk/bit_array.h 00:02:52.327 TEST_HEADER include/spdk/bit_pool.h 00:02:52.327 TEST_HEADER include/spdk/blob_bdev.h 00:02:52.327 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:52.327 TEST_HEADER include/spdk/blobfs.h 00:02:52.327 TEST_HEADER include/spdk/blob.h 00:02:52.327 TEST_HEADER include/spdk/conf.h 00:02:52.327 TEST_HEADER include/spdk/config.h 00:02:52.327 TEST_HEADER include/spdk/cpuset.h 00:02:52.327 TEST_HEADER include/spdk/crc16.h 00:02:52.327 TEST_HEADER include/spdk/crc32.h 00:02:52.327 CC app/spdk_dd/spdk_dd.o 00:02:52.327 TEST_HEADER include/spdk/crc64.h 00:02:52.327 TEST_HEADER include/spdk/endian.h 00:02:52.327 TEST_HEADER include/spdk/dma.h 00:02:52.327 TEST_HEADER include/spdk/dif.h 00:02:52.327 TEST_HEADER include/spdk/env_dpdk.h 00:02:52.327 TEST_HEADER include/spdk/event.h 00:02:52.327 CC app/iscsi_tgt/iscsi_tgt.o 00:02:52.327 TEST_HEADER 
include/spdk/env.h 00:02:52.327 TEST_HEADER include/spdk/fd_group.h 00:02:52.327 TEST_HEADER include/spdk/fd.h 00:02:52.327 TEST_HEADER include/spdk/file.h 00:02:52.327 TEST_HEADER include/spdk/fsdev.h 00:02:52.327 TEST_HEADER include/spdk/fsdev_module.h 00:02:52.327 TEST_HEADER include/spdk/ftl.h 00:02:52.327 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:52.327 TEST_HEADER include/spdk/gpt_spec.h 00:02:52.327 TEST_HEADER include/spdk/hexlify.h 00:02:52.327 TEST_HEADER include/spdk/histogram_data.h 00:02:52.327 TEST_HEADER include/spdk/idxd.h 00:02:52.327 TEST_HEADER include/spdk/idxd_spec.h 00:02:52.327 CC app/nvmf_tgt/nvmf_main.o 00:02:52.327 TEST_HEADER include/spdk/ioat.h 00:02:52.327 TEST_HEADER include/spdk/init.h 00:02:52.327 TEST_HEADER include/spdk/ioat_spec.h 00:02:52.327 TEST_HEADER include/spdk/json.h 00:02:52.327 CC app/spdk_tgt/spdk_tgt.o 00:02:52.327 TEST_HEADER include/spdk/iscsi_spec.h 00:02:52.327 TEST_HEADER include/spdk/jsonrpc.h 00:02:52.327 TEST_HEADER include/spdk/keyring.h 00:02:52.327 TEST_HEADER include/spdk/keyring_module.h 00:02:52.327 TEST_HEADER include/spdk/likely.h 00:02:52.327 TEST_HEADER include/spdk/log.h 00:02:52.327 TEST_HEADER include/spdk/lvol.h 00:02:52.327 TEST_HEADER include/spdk/md5.h 00:02:52.327 TEST_HEADER include/spdk/mmio.h 00:02:52.327 TEST_HEADER include/spdk/memory.h 00:02:52.327 TEST_HEADER include/spdk/nbd.h 00:02:52.327 TEST_HEADER include/spdk/notify.h 00:02:52.327 TEST_HEADER include/spdk/net.h 00:02:52.327 TEST_HEADER include/spdk/nvme.h 00:02:52.327 TEST_HEADER include/spdk/nvme_intel.h 00:02:52.327 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:52.327 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:52.327 TEST_HEADER include/spdk/nvme_spec.h 00:02:52.327 TEST_HEADER include/spdk/nvme_zns.h 00:02:52.327 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:52.327 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:52.327 TEST_HEADER include/spdk/nvmf_spec.h 00:02:52.327 TEST_HEADER include/spdk/nvmf.h 00:02:52.327 TEST_HEADER include/spdk/nvmf_transport.h 00:02:52.327 TEST_HEADER include/spdk/opal_spec.h 00:02:52.327 TEST_HEADER include/spdk/opal.h 00:02:52.327 TEST_HEADER include/spdk/pci_ids.h 00:02:52.327 TEST_HEADER include/spdk/pipe.h 00:02:52.327 TEST_HEADER include/spdk/queue.h 00:02:52.327 TEST_HEADER include/spdk/reduce.h 00:02:52.327 TEST_HEADER include/spdk/rpc.h 00:02:52.327 TEST_HEADER include/spdk/scheduler.h 00:02:52.327 TEST_HEADER include/spdk/scsi.h 00:02:52.327 TEST_HEADER include/spdk/scsi_spec.h 00:02:52.327 TEST_HEADER include/spdk/sock.h 00:02:52.327 TEST_HEADER include/spdk/stdinc.h 00:02:52.327 TEST_HEADER include/spdk/string.h 00:02:52.327 TEST_HEADER include/spdk/thread.h 00:02:52.327 TEST_HEADER include/spdk/trace.h 00:02:52.327 TEST_HEADER include/spdk/trace_parser.h 00:02:52.327 TEST_HEADER include/spdk/tree.h 00:02:52.327 TEST_HEADER include/spdk/ublk.h 00:02:52.327 TEST_HEADER include/spdk/util.h 00:02:52.327 TEST_HEADER include/spdk/uuid.h 00:02:52.327 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:52.327 TEST_HEADER include/spdk/version.h 00:02:52.327 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:52.327 TEST_HEADER include/spdk/vhost.h 00:02:52.327 TEST_HEADER include/spdk/vmd.h 00:02:52.327 TEST_HEADER include/spdk/xor.h 00:02:52.327 TEST_HEADER include/spdk/zipf.h 00:02:52.327 CXX test/cpp_headers/accel.o 00:02:52.327 CXX test/cpp_headers/accel_module.o 00:02:52.327 CXX test/cpp_headers/assert.o 00:02:52.327 CXX test/cpp_headers/base64.o 00:02:52.327 CXX test/cpp_headers/barrier.o 00:02:52.327 CXX 
test/cpp_headers/bdev_module.o 00:02:52.327 CXX test/cpp_headers/bdev.o 00:02:52.327 CXX test/cpp_headers/bdev_zone.o 00:02:52.327 CXX test/cpp_headers/bit_array.o 00:02:52.327 CXX test/cpp_headers/bit_pool.o 00:02:52.327 CXX test/cpp_headers/blobfs_bdev.o 00:02:52.327 CXX test/cpp_headers/blob_bdev.o 00:02:52.327 CXX test/cpp_headers/blobfs.o 00:02:52.327 CXX test/cpp_headers/blob.o 00:02:52.327 CXX test/cpp_headers/conf.o 00:02:52.327 CXX test/cpp_headers/config.o 00:02:52.327 CXX test/cpp_headers/cpuset.o 00:02:52.327 CXX test/cpp_headers/crc16.o 00:02:52.327 CXX test/cpp_headers/crc64.o 00:02:52.327 CXX test/cpp_headers/crc32.o 00:02:52.327 CXX test/cpp_headers/dif.o 00:02:52.327 CXX test/cpp_headers/dma.o 00:02:52.327 CXX test/cpp_headers/env_dpdk.o 00:02:52.327 CXX test/cpp_headers/endian.o 00:02:52.327 CXX test/cpp_headers/event.o 00:02:52.327 CXX test/cpp_headers/env.o 00:02:52.327 CXX test/cpp_headers/fd_group.o 00:02:52.327 CXX test/cpp_headers/fd.o 00:02:52.327 CXX test/cpp_headers/file.o 00:02:52.327 CXX test/cpp_headers/fsdev.o 00:02:52.327 CXX test/cpp_headers/fsdev_module.o 00:02:52.327 CXX test/cpp_headers/ftl.o 00:02:52.327 CXX test/cpp_headers/gpt_spec.o 00:02:52.327 CXX test/cpp_headers/fuse_dispatcher.o 00:02:52.327 CXX test/cpp_headers/hexlify.o 00:02:52.327 CXX test/cpp_headers/histogram_data.o 00:02:52.327 CXX test/cpp_headers/idxd.o 00:02:52.327 CC examples/ioat/perf/perf.o 00:02:52.327 CXX test/cpp_headers/idxd_spec.o 00:02:52.327 CXX test/cpp_headers/ioat.o 00:02:52.327 CXX test/cpp_headers/init.o 00:02:52.327 CC examples/ioat/verify/verify.o 00:02:52.327 CXX test/cpp_headers/ioat_spec.o 00:02:52.327 CXX test/cpp_headers/iscsi_spec.o 00:02:52.327 CXX test/cpp_headers/keyring_module.o 00:02:52.327 CXX test/cpp_headers/keyring.o 00:02:52.327 CXX test/cpp_headers/json.o 00:02:52.327 CXX test/cpp_headers/jsonrpc.o 00:02:52.327 CXX test/cpp_headers/likely.o 00:02:52.327 LINK spdk_lspci 00:02:52.327 CXX test/cpp_headers/lvol.o 00:02:52.327 CXX test/cpp_headers/log.o 00:02:52.327 CC examples/util/zipf/zipf.o 00:02:52.327 CXX test/cpp_headers/memory.o 00:02:52.327 CXX test/cpp_headers/mmio.o 00:02:52.327 CXX test/cpp_headers/md5.o 00:02:52.327 CC test/env/pci/pci_ut.o 00:02:52.327 CXX test/cpp_headers/nbd.o 00:02:52.327 CXX test/cpp_headers/net.o 00:02:52.327 CXX test/cpp_headers/nvme.o 00:02:52.327 CXX test/cpp_headers/notify.o 00:02:52.327 CXX test/cpp_headers/nvme_spec.o 00:02:52.327 CXX test/cpp_headers/nvme_zns.o 00:02:52.327 CXX test/cpp_headers/nvme_ocssd.o 00:02:52.327 CXX test/cpp_headers/nvme_intel.o 00:02:52.327 CC test/app/histogram_perf/histogram_perf.o 00:02:52.327 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:52.327 CXX test/cpp_headers/nvmf_cmd.o 00:02:52.327 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:52.327 CXX test/cpp_headers/nvmf.o 00:02:52.327 CXX test/cpp_headers/opal_spec.o 00:02:52.327 CXX test/cpp_headers/nvmf_spec.o 00:02:52.327 CXX test/cpp_headers/nvmf_transport.o 00:02:52.327 CXX test/cpp_headers/pci_ids.o 00:02:52.327 CXX test/cpp_headers/opal.o 00:02:52.327 CXX test/cpp_headers/pipe.o 00:02:52.327 CXX test/cpp_headers/queue.o 00:02:52.327 CC test/app/stub/stub.o 00:02:52.327 CC test/env/vtophys/vtophys.o 00:02:52.327 CC test/thread/poller_perf/poller_perf.o 00:02:52.327 CXX test/cpp_headers/rpc.o 00:02:52.327 CXX test/cpp_headers/reduce.o 00:02:52.327 CXX test/cpp_headers/scheduler.o 00:02:52.327 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:52.327 CXX test/cpp_headers/scsi_spec.o 00:02:52.327 CXX test/cpp_headers/scsi.o 
00:02:52.328 CXX test/cpp_headers/string.o 00:02:52.328 CC test/app/jsoncat/jsoncat.o 00:02:52.328 CXX test/cpp_headers/sock.o 00:02:52.328 CXX test/cpp_headers/stdinc.o 00:02:52.328 CXX test/cpp_headers/thread.o 00:02:52.597 CC test/env/memory/memory_ut.o 00:02:52.597 CXX test/cpp_headers/trace_parser.o 00:02:52.597 CXX test/cpp_headers/trace.o 00:02:52.597 CXX test/cpp_headers/tree.o 00:02:52.597 CXX test/cpp_headers/util.o 00:02:52.597 CXX test/cpp_headers/ublk.o 00:02:52.597 CXX test/cpp_headers/version.o 00:02:52.597 CXX test/cpp_headers/uuid.o 00:02:52.597 CXX test/cpp_headers/vfio_user_pci.o 00:02:52.597 CXX test/cpp_headers/vfio_user_spec.o 00:02:52.597 CC app/fio/nvme/fio_plugin.o 00:02:52.597 CXX test/cpp_headers/vhost.o 00:02:52.597 CXX test/cpp_headers/xor.o 00:02:52.597 CXX test/cpp_headers/vmd.o 00:02:52.597 CXX test/cpp_headers/zipf.o 00:02:52.597 CC app/fio/bdev/fio_plugin.o 00:02:52.597 CC test/dma/test_dma/test_dma.o 00:02:52.597 CC test/app/bdev_svc/bdev_svc.o 00:02:52.597 LINK rpc_client_test 00:02:52.597 LINK interrupt_tgt 00:02:52.597 LINK spdk_nvme_discover 00:02:52.597 LINK spdk_trace_record 00:02:52.864 LINK nvmf_tgt 00:02:52.864 LINK iscsi_tgt 00:02:52.864 LINK spdk_tgt 00:02:53.126 LINK histogram_perf 00:02:53.126 CC test/env/mem_callbacks/mem_callbacks.o 00:02:53.126 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:53.126 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:53.126 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:53.126 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:53.126 LINK zipf 00:02:53.126 LINK ioat_perf 00:02:53.387 LINK spdk_dd 00:02:53.387 LINK jsoncat 00:02:53.387 LINK vtophys 00:02:53.387 LINK spdk_trace 00:02:53.387 LINK poller_perf 00:02:53.648 LINK stub 00:02:53.648 LINK env_dpdk_post_init 00:02:53.648 LINK verify 00:02:53.648 LINK bdev_svc 00:02:53.648 LINK spdk_bdev 00:02:53.648 LINK spdk_top 00:02:53.648 LINK test_dma 00:02:53.909 LINK pci_ut 00:02:53.909 CC examples/idxd/perf/perf.o 00:02:53.909 CC examples/sock/hello_world/hello_sock.o 00:02:53.909 CC examples/vmd/lsvmd/lsvmd.o 00:02:53.909 LINK vhost_fuzz 00:02:53.909 CC examples/vmd/led/led.o 00:02:53.909 LINK nvme_fuzz 00:02:53.909 CC examples/thread/thread/thread_ex.o 00:02:53.909 CC app/vhost/vhost.o 00:02:53.909 LINK spdk_nvme 00:02:53.909 LINK mem_callbacks 00:02:53.909 LINK spdk_nvme_perf 00:02:53.909 LINK lsvmd 00:02:53.909 CC test/event/event_perf/event_perf.o 00:02:54.171 CC test/event/reactor_perf/reactor_perf.o 00:02:54.171 CC test/event/reactor/reactor.o 00:02:54.171 LINK led 00:02:54.171 CC test/event/app_repeat/app_repeat.o 00:02:54.171 CC test/event/scheduler/scheduler.o 00:02:54.171 LINK spdk_nvme_identify 00:02:54.171 LINK hello_sock 00:02:54.171 LINK thread 00:02:54.171 LINK vhost 00:02:54.171 LINK idxd_perf 00:02:54.171 LINK event_perf 00:02:54.171 LINK reactor 00:02:54.171 LINK reactor_perf 00:02:54.171 LINK app_repeat 00:02:54.432 LINK scheduler 00:02:54.432 CC test/nvme/reset/reset.o 00:02:54.432 CC test/nvme/overhead/overhead.o 00:02:54.432 CC test/nvme/sgl/sgl.o 00:02:54.432 CC test/nvme/simple_copy/simple_copy.o 00:02:54.432 CC test/nvme/aer/aer.o 00:02:54.432 CC test/nvme/boot_partition/boot_partition.o 00:02:54.432 CC test/nvme/e2edp/nvme_dp.o 00:02:54.432 CC test/nvme/reserve/reserve.o 00:02:54.432 CC test/nvme/fdp/fdp.o 00:02:54.432 CC test/nvme/connect_stress/connect_stress.o 00:02:54.432 CC test/nvme/startup/startup.o 00:02:54.432 CC test/nvme/compliance/nvme_compliance.o 00:02:54.432 CC test/nvme/err_injection/err_injection.o 00:02:54.432 CC 
test/nvme/fused_ordering/fused_ordering.o 00:02:54.432 CC test/nvme/cuse/cuse.o 00:02:54.432 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:54.432 CC test/accel/dif/dif.o 00:02:54.432 CC test/blobfs/mkfs/mkfs.o 00:02:54.694 CC test/lvol/esnap/esnap.o 00:02:54.694 LINK memory_ut 00:02:54.694 LINK boot_partition 00:02:54.694 LINK connect_stress 00:02:54.694 LINK startup 00:02:54.694 LINK err_injection 00:02:54.694 LINK fused_ordering 00:02:54.694 LINK mkfs 00:02:54.694 LINK reserve 00:02:54.694 LINK doorbell_aers 00:02:54.694 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:54.694 LINK simple_copy 00:02:54.694 LINK sgl 00:02:54.694 CC examples/nvme/reconnect/reconnect.o 00:02:54.694 LINK nvme_dp 00:02:54.694 LINK reset 00:02:54.694 CC examples/nvme/hotplug/hotplug.o 00:02:54.694 LINK overhead 00:02:54.694 CC examples/nvme/hello_world/hello_world.o 00:02:54.694 CC examples/nvme/abort/abort.o 00:02:54.694 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:54.694 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:54.694 CC examples/nvme/arbitration/arbitration.o 00:02:54.694 LINK aer 00:02:54.694 LINK nvme_compliance 00:02:54.694 LINK fdp 00:02:54.955 CC examples/accel/perf/accel_perf.o 00:02:54.955 CC examples/blob/cli/blobcli.o 00:02:54.955 CC examples/blob/hello_world/hello_blob.o 00:02:54.955 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:54.955 LINK pmr_persistence 00:02:54.955 LINK iscsi_fuzz 00:02:54.955 LINK cmb_copy 00:02:54.955 LINK hello_world 00:02:54.955 LINK hotplug 00:02:55.215 LINK reconnect 00:02:55.215 LINK abort 00:02:55.215 LINK dif 00:02:55.215 LINK arbitration 00:02:55.215 LINK hello_blob 00:02:55.215 LINK nvme_manage 00:02:55.215 LINK hello_fsdev 00:02:55.215 LINK accel_perf 00:02:55.476 LINK blobcli 00:02:55.737 LINK cuse 00:02:55.737 CC test/bdev/bdevio/bdevio.o 00:02:55.998 CC examples/bdev/hello_world/hello_bdev.o 00:02:55.998 CC examples/bdev/bdevperf/bdevperf.o 00:02:56.365 LINK bdevio 00:02:56.365 LINK hello_bdev 00:02:56.626 LINK bdevperf 00:02:57.196 CC examples/nvmf/nvmf/nvmf.o 00:02:57.766 LINK nvmf 00:02:59.149 LINK esnap 00:02:59.410 00:02:59.410 real 0m56.032s 00:02:59.410 user 8m6.656s 00:02:59.410 sys 5m36.671s 00:02:59.410 07:01:21 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:59.410 07:01:21 make -- common/autotest_common.sh@10 -- $ set +x 00:02:59.410 ************************************ 00:02:59.410 END TEST make 00:02:59.410 ************************************ 00:02:59.410 07:01:21 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:59.410 07:01:21 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:59.410 07:01:21 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:59.410 07:01:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.410 07:01:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:59.410 07:01:21 -- pm/common@44 -- $ pid=3189429 00:02:59.410 07:01:21 -- pm/common@50 -- $ kill -TERM 3189429 00:02:59.410 07:01:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.410 07:01:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:59.410 07:01:21 -- pm/common@44 -- $ pid=3189430 00:02:59.410 07:01:21 -- pm/common@50 -- $ kill -TERM 3189430 00:02:59.410 07:01:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.410 07:01:21 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:59.410 07:01:21 -- pm/common@44 -- $ pid=3189432 00:02:59.410 07:01:21 -- pm/common@50 -- $ kill -TERM 3189432 00:02:59.410 07:01:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.410 07:01:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:59.410 07:01:21 -- pm/common@44 -- $ pid=3189455 00:02:59.410 07:01:21 -- pm/common@50 -- $ sudo -E kill -TERM 3189455 00:02:59.410 07:01:21 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:59.410 07:01:21 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:59.410 07:01:21 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:59.410 07:01:21 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:59.410 07:01:21 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:59.671 07:01:21 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:59.671 07:01:21 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:59.671 07:01:21 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:59.671 07:01:21 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:59.671 07:01:21 -- scripts/common.sh@336 -- # IFS=.-: 00:02:59.671 07:01:21 -- scripts/common.sh@336 -- # read -ra ver1 00:02:59.671 07:01:21 -- scripts/common.sh@337 -- # IFS=.-: 00:02:59.671 07:01:21 -- scripts/common.sh@337 -- # read -ra ver2 00:02:59.671 07:01:21 -- scripts/common.sh@338 -- # local 'op=<' 00:02:59.671 07:01:21 -- scripts/common.sh@340 -- # ver1_l=2 00:02:59.671 07:01:21 -- scripts/common.sh@341 -- # ver2_l=1 00:02:59.671 07:01:21 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:59.671 07:01:21 -- scripts/common.sh@344 -- # case "$op" in 00:02:59.671 07:01:21 -- scripts/common.sh@345 -- # : 1 00:02:59.671 07:01:21 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:59.671 07:01:21 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:59.671 07:01:21 -- scripts/common.sh@365 -- # decimal 1 00:02:59.671 07:01:21 -- scripts/common.sh@353 -- # local d=1 00:02:59.671 07:01:21 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:59.671 07:01:21 -- scripts/common.sh@355 -- # echo 1 00:02:59.671 07:01:21 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:59.671 07:01:21 -- scripts/common.sh@366 -- # decimal 2 00:02:59.671 07:01:21 -- scripts/common.sh@353 -- # local d=2 00:02:59.671 07:01:21 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:59.671 07:01:21 -- scripts/common.sh@355 -- # echo 2 00:02:59.671 07:01:21 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:59.671 07:01:21 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:59.671 07:01:21 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:59.671 07:01:21 -- scripts/common.sh@368 -- # return 0 00:02:59.671 07:01:21 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:59.671 07:01:21 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:59.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.671 --rc genhtml_branch_coverage=1 00:02:59.671 --rc genhtml_function_coverage=1 00:02:59.671 --rc genhtml_legend=1 00:02:59.671 --rc geninfo_all_blocks=1 00:02:59.671 --rc geninfo_unexecuted_blocks=1 00:02:59.671 00:02:59.671 ' 00:02:59.671 07:01:21 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:59.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.671 --rc genhtml_branch_coverage=1 00:02:59.671 --rc genhtml_function_coverage=1 00:02:59.671 --rc genhtml_legend=1 00:02:59.671 --rc geninfo_all_blocks=1 00:02:59.671 --rc geninfo_unexecuted_blocks=1 00:02:59.671 00:02:59.671 ' 00:02:59.671 07:01:21 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:59.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.671 --rc genhtml_branch_coverage=1 00:02:59.671 --rc genhtml_function_coverage=1 00:02:59.671 --rc genhtml_legend=1 00:02:59.671 --rc geninfo_all_blocks=1 00:02:59.671 --rc geninfo_unexecuted_blocks=1 00:02:59.671 00:02:59.671 ' 00:02:59.671 07:01:21 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:59.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.671 --rc genhtml_branch_coverage=1 00:02:59.671 --rc genhtml_function_coverage=1 00:02:59.671 --rc genhtml_legend=1 00:02:59.671 --rc geninfo_all_blocks=1 00:02:59.671 --rc geninfo_unexecuted_blocks=1 00:02:59.671 00:02:59.671 ' 00:02:59.671 07:01:21 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:59.671 07:01:21 -- nvmf/common.sh@7 -- # uname -s 00:02:59.671 07:01:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:59.671 07:01:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:59.671 07:01:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:59.671 07:01:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:59.671 07:01:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:59.671 07:01:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:59.671 07:01:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:59.671 07:01:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:59.671 07:01:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:59.671 07:01:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:59.671 07:01:21 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:59.671 07:01:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:59.671 07:01:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:59.671 07:01:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:59.671 07:01:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:59.671 07:01:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:59.671 07:01:21 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:59.671 07:01:21 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:59.671 07:01:21 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:59.671 07:01:21 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:59.671 07:01:21 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:59.671 07:01:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.671 07:01:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.671 07:01:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.671 07:01:21 -- paths/export.sh@5 -- # export PATH 00:02:59.671 07:01:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.671 07:01:21 -- nvmf/common.sh@51 -- # : 0 00:02:59.671 07:01:21 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:59.671 07:01:21 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:59.671 07:01:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:59.671 07:01:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:59.671 07:01:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:59.671 07:01:21 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:59.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:59.671 07:01:21 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:59.671 07:01:21 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:59.671 07:01:21 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:59.671 07:01:21 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:59.671 07:01:21 -- spdk/autotest.sh@32 -- # uname -s 00:02:59.671 07:01:21 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:59.671 07:01:21 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:59.671 07:01:21 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
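The "[: : integer expression expected" message captured above comes from test(1) being handed an empty string where an integer is required ('[' '' -eq 1 ']' at nvmf/common.sh line 33); the script tolerates it and falls through. A minimal sketch of a guard that avoids that class of message, assuming a bash [[ ]] context; the variable name is illustrative, not the one common.sh actually tests:

    # Hypothetical guard: only compare numerically once the value is non-empty.
    flag=${SPDK_TEST_EXAMPLE_FLAG:-}        # illustrative variable, may be unset
    if [[ -n $flag && $flag -eq 1 ]]; then
        echo "flag enabled"
    fi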
00:02:59.671 07:01:21 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:59.671 07:01:21 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:59.671 07:01:21 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:59.671 07:01:21 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:59.671 07:01:21 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:59.671 07:01:21 -- spdk/autotest.sh@48 -- # udevadm_pid=3255573 00:02:59.671 07:01:21 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:59.671 07:01:21 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:59.671 07:01:21 -- pm/common@17 -- # local monitor 00:02:59.671 07:01:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.671 07:01:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.671 07:01:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.671 07:01:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.671 07:01:21 -- pm/common@21 -- # date +%s 00:02:59.671 07:01:21 -- pm/common@21 -- # date +%s 00:02:59.671 07:01:21 -- pm/common@25 -- # sleep 1 00:02:59.671 07:01:21 -- pm/common@21 -- # date +%s 00:02:59.671 07:01:21 -- pm/common@21 -- # date +%s 00:02:59.671 07:01:21 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732082481 00:02:59.671 07:01:21 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732082481 00:02:59.671 07:01:21 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732082481 00:02:59.671 07:01:21 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732082481 00:02:59.671 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732082481_collect-cpu-load.pm.log 00:02:59.671 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732082481_collect-vmstat.pm.log 00:02:59.671 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732082481_collect-cpu-temp.pm.log 00:02:59.671 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732082481_collect-bmc-pm.bmc.pm.log 00:03:00.614 07:01:22 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:00.614 07:01:22 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:00.614 07:01:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:00.614 07:01:22 -- common/autotest_common.sh@10 -- # set +x 00:03:00.614 07:01:22 -- spdk/autotest.sh@59 -- # create_test_list 00:03:00.614 07:01:22 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:00.614 07:01:22 -- common/autotest_common.sh@10 -- # set +x 00:03:00.614 07:01:22 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:00.614 07:01:22 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:00.614 07:01:22 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:00.614 07:01:22 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:00.614 07:01:22 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:00.614 07:01:22 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:00.614 07:01:22 -- common/autotest_common.sh@1455 -- # uname 00:03:00.614 07:01:22 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:00.614 07:01:22 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:00.874 07:01:22 -- common/autotest_common.sh@1475 -- # uname 00:03:00.874 07:01:22 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:00.874 07:01:22 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:00.874 07:01:22 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:00.874 lcov: LCOV version 1.15 00:03:00.874 07:01:22 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:15.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:15.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:33.892 07:01:53 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:33.892 07:01:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:33.892 07:01:53 -- common/autotest_common.sh@10 -- # set +x 00:03:33.892 07:01:53 -- spdk/autotest.sh@78 -- # rm -f 00:03:33.892 07:01:53 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.464 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:34.724 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:34.724 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:34.724 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:34.724 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:34.724 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:34.724 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:34.724 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:34.724 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:34.724 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:34.724 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:34.984 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:34.984 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:34.984 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:34.984 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:34.984 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:34.984 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:35.245 07:01:57 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:35.245 07:01:57 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:35.245 07:01:57 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:35.245 07:01:57 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:35.245 07:01:57 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:35.245 07:01:57 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:35.245 07:01:57 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:35.245 07:01:57 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:35.245 07:01:57 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:35.245 07:01:57 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:35.245 07:01:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:35.245 07:01:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:35.245 07:01:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:35.245 07:01:57 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:35.245 07:01:57 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:35.245 No valid GPT data, bailing 00:03:35.245 07:01:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:35.245 07:01:57 -- scripts/common.sh@394 -- # pt= 00:03:35.245 07:01:57 -- scripts/common.sh@395 -- # return 1 00:03:35.245 07:01:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:35.245 1+0 records in 00:03:35.245 1+0 records out 00:03:35.245 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0048015 s, 218 MB/s 00:03:35.245 07:01:57 -- spdk/autotest.sh@105 -- # sync 00:03:35.245 07:01:57 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:35.245 07:01:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:35.245 07:01:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:45.255 07:02:06 -- spdk/autotest.sh@111 -- # uname -s 00:03:45.255 07:02:06 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:45.255 07:02:06 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:45.255 07:02:06 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:47.797 Hugepages 00:03:47.797 node hugesize free / total 00:03:47.797 node0 1048576kB 0 / 0 00:03:47.797 node0 2048kB 0 / 0 00:03:47.797 node1 1048576kB 0 / 0 00:03:47.797 node1 2048kB 0 / 0 00:03:47.797 00:03:47.797 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:47.797 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:47.797 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:47.798 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:47.798 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:47.798 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:47.798 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:47.798 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:47.798 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:47.798 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:47.798 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:47.798 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:47.798 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:47.798 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:47.798 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:47.798 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:47.798 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:47.798 I/OAT 0000:80:01.7 8086 0b00 
1 ioatdma - - 00:03:47.798 07:02:09 -- spdk/autotest.sh@117 -- # uname -s 00:03:47.798 07:02:09 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:47.798 07:02:09 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:47.798 07:02:09 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:51.098 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:51.098 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:51.098 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:51.098 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:51.098 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:51.098 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:51.098 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:51.098 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:51.358 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:51.358 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:51.358 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:51.358 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:51.358 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:51.358 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:51.358 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:51.358 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:53.270 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:53.531 07:02:15 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:54.472 07:02:16 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:54.472 07:02:16 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:54.472 07:02:16 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:54.472 07:02:16 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:54.472 07:02:16 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:54.472 07:02:16 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:54.472 07:02:16 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:54.472 07:02:16 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:54.472 07:02:16 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:54.472 07:02:16 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:54.472 07:02:16 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:54.472 07:02:16 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:57.774 Waiting for block devices as requested 00:03:58.035 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:58.035 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:58.035 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:58.296 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:58.296 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:58.296 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:58.556 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:58.556 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:58.556 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:58.817 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:58.817 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:59.078 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:59.078 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:59.078 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:59.338 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:59.338 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:59.338 0000:00:01.1 (8086 0b00): vfio-pci 
-> ioatdma 00:03:59.598 07:02:21 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:59.598 07:02:21 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:59.598 07:02:21 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:59.598 07:02:21 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:03:59.598 07:02:21 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:59.598 07:02:21 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:59.598 07:02:21 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:59.598 07:02:21 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:59.598 07:02:21 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:59.598 07:02:21 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:59.598 07:02:21 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:59.598 07:02:21 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:59.598 07:02:21 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:59.598 07:02:21 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:03:59.598 07:02:21 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:59.598 07:02:21 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:59.598 07:02:21 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:59.598 07:02:21 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:59.598 07:02:21 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:59.858 07:02:21 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:59.858 07:02:21 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:59.858 07:02:21 -- common/autotest_common.sh@1541 -- # continue 00:03:59.858 07:02:21 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:59.858 07:02:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:59.859 07:02:21 -- common/autotest_common.sh@10 -- # set +x 00:03:59.859 07:02:21 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:59.859 07:02:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:59.859 07:02:21 -- common/autotest_common.sh@10 -- # set +x 00:03:59.859 07:02:21 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:03.159 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:03.159 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:03.159 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:03.159 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:03.419 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:03.419 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:03.419 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:03.419 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:03.419 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:03.419 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:03.419 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:03.419 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:03.419 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:03.419 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:03.419 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:03.419 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:03.419 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:03.990 07:02:25 -- spdk/autotest.sh@127 -- # timing_exit afterboot 
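The id-ctrl probing above reads the controller's OACS word (0x5f on this box) and derives oacs_ns_manage=8 before checking unvmcap. The xtrace shows only the resulting values, so the bit arithmetic below is a reconstruction, assuming the usual test of bit 3 (Namespace Management); the device path is taken from the trace:

    # Reconstruction of the OACS namespace-management check seen in the trace.
    oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)   # ' 0x5f' above
    oacs_ns_manage=$(( oacs & 0x8 ))    # bit 3 = Namespace Management supported
    if [[ $oacs_ns_manage -ne 0 ]]; then
        unvmcap=$(nvme id-ctrl /dev/nvme0 | grep unvmcap | cut -d: -f2)
        [[ ${unvmcap//[[:space:]]/} -eq 0 ]] && echo "no unallocated NVM capacity"
    fi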
00:04:03.990 07:02:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:03.990 07:02:25 -- common/autotest_common.sh@10 -- # set +x 00:04:03.990 07:02:26 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:03.990 07:02:26 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:03.990 07:02:26 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:03.990 07:02:26 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:03.990 07:02:26 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:03.990 07:02:26 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:03.990 07:02:26 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:03.990 07:02:26 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:03.990 07:02:26 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:03.990 07:02:26 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:03.990 07:02:26 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:03.990 07:02:26 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:03.990 07:02:26 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:03.990 07:02:26 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:03.990 07:02:26 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:04:03.990 07:02:26 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:03.990 07:02:26 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:03.990 07:02:26 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:04:03.990 07:02:26 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:03.990 07:02:26 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:03.990 07:02:26 -- common/autotest_common.sh@1570 -- # return 0 00:04:03.990 07:02:26 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:03.990 07:02:26 -- common/autotest_common.sh@1578 -- # return 0 00:04:03.990 07:02:26 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:03.990 07:02:26 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:03.990 07:02:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:03.990 07:02:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:03.990 07:02:26 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:03.990 07:02:26 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.990 07:02:26 -- common/autotest_common.sh@10 -- # set +x 00:04:03.990 07:02:26 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:03.990 07:02:26 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:03.990 07:02:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:03.990 07:02:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:03.990 07:02:26 -- common/autotest_common.sh@10 -- # set +x 00:04:03.990 ************************************ 00:04:03.990 START TEST env 00:04:03.990 ************************************ 00:04:03.990 07:02:26 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:04.251 * Looking for test storage... 
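The opal_revert_cleanup path above enumerates NVMe bdfs and filters them by PCI device id (0x0a54). The jq filter and the sysfs read appear verbatim in the trace; the surrounding loop is a sketch assembled from the individual xtrace steps:

    # Sketch of get_nvme_bdfs_by_id as reconstructed from the trace above.
    _bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    bdfs=()
    for bdf in "${_bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # 0xa80a on this box
        [[ $device == 0x0a54 ]] && bdfs+=("$bdf")          # opal-target device id
    done
    printf '%s\n' "${bdfs[@]}"                             # empty here: no match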
00:04:04.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:04.251 07:02:26 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:04.251 07:02:26 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:04.251 07:02:26 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:04.251 07:02:26 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:04.251 07:02:26 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:04.251 07:02:26 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.251 07:02:26 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.251 07:02:26 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.251 07:02:26 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.251 07:02:26 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.251 07:02:26 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.251 07:02:26 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.251 07:02:26 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:04.251 07:02:26 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.251 07:02:26 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.251 07:02:26 env -- scripts/common.sh@344 -- # case "$op" in 00:04:04.251 07:02:26 env -- scripts/common.sh@345 -- # : 1 00:04:04.251 07:02:26 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.251 07:02:26 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:04.251 07:02:26 env -- scripts/common.sh@365 -- # decimal 1 00:04:04.251 07:02:26 env -- scripts/common.sh@353 -- # local d=1 00:04:04.251 07:02:26 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.251 07:02:26 env -- scripts/common.sh@355 -- # echo 1 00:04:04.251 07:02:26 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.251 07:02:26 env -- scripts/common.sh@366 -- # decimal 2 00:04:04.251 07:02:26 env -- scripts/common.sh@353 -- # local d=2 00:04:04.251 07:02:26 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.251 07:02:26 env -- scripts/common.sh@355 -- # echo 2 00:04:04.251 07:02:26 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.251 07:02:26 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.251 07:02:26 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.251 07:02:26 env -- scripts/common.sh@368 -- # return 0 00:04:04.251 07:02:26 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.251 07:02:26 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:04.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.251 --rc genhtml_branch_coverage=1 00:04:04.251 --rc genhtml_function_coverage=1 00:04:04.251 --rc genhtml_legend=1 00:04:04.251 --rc geninfo_all_blocks=1 00:04:04.251 --rc geninfo_unexecuted_blocks=1 00:04:04.251 00:04:04.251 ' 00:04:04.251 07:02:26 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:04.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.251 --rc genhtml_branch_coverage=1 00:04:04.251 --rc genhtml_function_coverage=1 00:04:04.251 --rc genhtml_legend=1 00:04:04.251 --rc geninfo_all_blocks=1 00:04:04.251 --rc geninfo_unexecuted_blocks=1 00:04:04.251 00:04:04.251 ' 00:04:04.251 07:02:26 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:04.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.251 --rc genhtml_branch_coverage=1 00:04:04.251 --rc genhtml_function_coverage=1 
00:04:04.251 --rc genhtml_legend=1 00:04:04.251 --rc geninfo_all_blocks=1 00:04:04.251 --rc geninfo_unexecuted_blocks=1 00:04:04.251 00:04:04.251 ' 00:04:04.251 07:02:26 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:04.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.251 --rc genhtml_branch_coverage=1 00:04:04.251 --rc genhtml_function_coverage=1 00:04:04.251 --rc genhtml_legend=1 00:04:04.251 --rc geninfo_all_blocks=1 00:04:04.251 --rc geninfo_unexecuted_blocks=1 00:04:04.251 00:04:04.251 ' 00:04:04.251 07:02:26 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:04.251 07:02:26 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:04.251 07:02:26 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:04.251 07:02:26 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.251 ************************************ 00:04:04.251 START TEST env_memory 00:04:04.251 ************************************ 00:04:04.251 07:02:26 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:04.251 00:04:04.251 00:04:04.251 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.251 http://cunit.sourceforge.net/ 00:04:04.251 00:04:04.251 00:04:04.251 Suite: memory 00:04:04.251 Test: alloc and free memory map ...[2024-11-20 07:02:26.470132] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:04.251 passed 00:04:04.251 Test: mem map translation ...[2024-11-20 07:02:26.495563] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:04.251 [2024-11-20 07:02:26.495589] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:04.251 [2024-11-20 07:02:26.495637] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:04.251 [2024-11-20 07:02:26.495645] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:04.512 passed 00:04:04.512 Test: mem map registration ...[2024-11-20 07:02:26.550790] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:04.512 [2024-11-20 07:02:26.550810] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:04.512 passed 00:04:04.512 Test: mem map adjacent registrations ...passed 00:04:04.512 00:04:04.512 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.512 suites 1 1 n/a 0 0 00:04:04.512 tests 4 4 4 0 0 00:04:04.512 asserts 152 152 152 0 n/a 00:04:04.512 00:04:04.512 Elapsed time = 0.194 seconds 00:04:04.512 00:04:04.512 real 0m0.208s 00:04:04.512 user 0m0.198s 00:04:04.512 sys 0m0.010s 00:04:04.512 07:02:26 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:04.512 07:02:26 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
00:04:04.512 ************************************ 00:04:04.512 END TEST env_memory 00:04:04.512 ************************************ 00:04:04.512 07:02:26 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:04.513 07:02:26 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:04.513 07:02:26 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:04.513 07:02:26 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.513 ************************************ 00:04:04.513 START TEST env_vtophys 00:04:04.513 ************************************ 00:04:04.513 07:02:26 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:04.513 EAL: lib.eal log level changed from notice to debug 00:04:04.513 EAL: Detected lcore 0 as core 0 on socket 0 00:04:04.513 EAL: Detected lcore 1 as core 1 on socket 0 00:04:04.513 EAL: Detected lcore 2 as core 2 on socket 0 00:04:04.513 EAL: Detected lcore 3 as core 3 on socket 0 00:04:04.513 EAL: Detected lcore 4 as core 4 on socket 0 00:04:04.513 EAL: Detected lcore 5 as core 5 on socket 0 00:04:04.513 EAL: Detected lcore 6 as core 6 on socket 0 00:04:04.513 EAL: Detected lcore 7 as core 7 on socket 0 00:04:04.513 EAL: Detected lcore 8 as core 8 on socket 0 00:04:04.513 EAL: Detected lcore 9 as core 9 on socket 0 00:04:04.513 EAL: Detected lcore 10 as core 10 on socket 0 00:04:04.513 EAL: Detected lcore 11 as core 11 on socket 0 00:04:04.513 EAL: Detected lcore 12 as core 12 on socket 0 00:04:04.513 EAL: Detected lcore 13 as core 13 on socket 0 00:04:04.513 EAL: Detected lcore 14 as core 14 on socket 0 00:04:04.513 EAL: Detected lcore 15 as core 15 on socket 0 00:04:04.513 EAL: Detected lcore 16 as core 16 on socket 0 00:04:04.513 EAL: Detected lcore 17 as core 17 on socket 0 00:04:04.513 EAL: Detected lcore 18 as core 18 on socket 0 00:04:04.513 EAL: Detected lcore 19 as core 19 on socket 0 00:04:04.513 EAL: Detected lcore 20 as core 20 on socket 0 00:04:04.513 EAL: Detected lcore 21 as core 21 on socket 0 00:04:04.513 EAL: Detected lcore 22 as core 22 on socket 0 00:04:04.513 EAL: Detected lcore 23 as core 23 on socket 0 00:04:04.513 EAL: Detected lcore 24 as core 24 on socket 0 00:04:04.513 EAL: Detected lcore 25 as core 25 on socket 0 00:04:04.513 EAL: Detected lcore 26 as core 26 on socket 0 00:04:04.513 EAL: Detected lcore 27 as core 27 on socket 0 00:04:04.513 EAL: Detected lcore 28 as core 28 on socket 0 00:04:04.513 EAL: Detected lcore 29 as core 29 on socket 0 00:04:04.513 EAL: Detected lcore 30 as core 30 on socket 0 00:04:04.513 EAL: Detected lcore 31 as core 31 on socket 0 00:04:04.513 EAL: Detected lcore 32 as core 32 on socket 0 00:04:04.513 EAL: Detected lcore 33 as core 33 on socket 0 00:04:04.513 EAL: Detected lcore 34 as core 34 on socket 0 00:04:04.513 EAL: Detected lcore 35 as core 35 on socket 0 00:04:04.513 EAL: Detected lcore 36 as core 0 on socket 1 00:04:04.513 EAL: Detected lcore 37 as core 1 on socket 1 00:04:04.513 EAL: Detected lcore 38 as core 2 on socket 1 00:04:04.513 EAL: Detected lcore 39 as core 3 on socket 1 00:04:04.513 EAL: Detected lcore 40 as core 4 on socket 1 00:04:04.513 EAL: Detected lcore 41 as core 5 on socket 1 00:04:04.513 EAL: Detected lcore 42 as core 6 on socket 1 00:04:04.513 EAL: Detected lcore 43 as core 7 on socket 1 00:04:04.513 EAL: Detected lcore 44 as core 8 on socket 1 00:04:04.513 EAL: Detected lcore 45 as core 9 on socket 1 
00:04:04.513 EAL: Detected lcore 46 as core 10 on socket 1 00:04:04.513 EAL: Detected lcore 47 as core 11 on socket 1 00:04:04.513 EAL: Detected lcore 48 as core 12 on socket 1 00:04:04.513 EAL: Detected lcore 49 as core 13 on socket 1 00:04:04.513 EAL: Detected lcore 50 as core 14 on socket 1 00:04:04.513 EAL: Detected lcore 51 as core 15 on socket 1 00:04:04.513 EAL: Detected lcore 52 as core 16 on socket 1 00:04:04.513 EAL: Detected lcore 53 as core 17 on socket 1 00:04:04.513 EAL: Detected lcore 54 as core 18 on socket 1 00:04:04.513 EAL: Detected lcore 55 as core 19 on socket 1 00:04:04.513 EAL: Detected lcore 56 as core 20 on socket 1 00:04:04.513 EAL: Detected lcore 57 as core 21 on socket 1 00:04:04.513 EAL: Detected lcore 58 as core 22 on socket 1 00:04:04.513 EAL: Detected lcore 59 as core 23 on socket 1 00:04:04.513 EAL: Detected lcore 60 as core 24 on socket 1 00:04:04.513 EAL: Detected lcore 61 as core 25 on socket 1 00:04:04.513 EAL: Detected lcore 62 as core 26 on socket 1 00:04:04.513 EAL: Detected lcore 63 as core 27 on socket 1 00:04:04.513 EAL: Detected lcore 64 as core 28 on socket 1 00:04:04.513 EAL: Detected lcore 65 as core 29 on socket 1 00:04:04.513 EAL: Detected lcore 66 as core 30 on socket 1 00:04:04.513 EAL: Detected lcore 67 as core 31 on socket 1 00:04:04.513 EAL: Detected lcore 68 as core 32 on socket 1 00:04:04.513 EAL: Detected lcore 69 as core 33 on socket 1 00:04:04.513 EAL: Detected lcore 70 as core 34 on socket 1 00:04:04.513 EAL: Detected lcore 71 as core 35 on socket 1 00:04:04.513 EAL: Detected lcore 72 as core 0 on socket 0 00:04:04.513 EAL: Detected lcore 73 as core 1 on socket 0 00:04:04.513 EAL: Detected lcore 74 as core 2 on socket 0 00:04:04.513 EAL: Detected lcore 75 as core 3 on socket 0 00:04:04.513 EAL: Detected lcore 76 as core 4 on socket 0 00:04:04.513 EAL: Detected lcore 77 as core 5 on socket 0 00:04:04.513 EAL: Detected lcore 78 as core 6 on socket 0 00:04:04.513 EAL: Detected lcore 79 as core 7 on socket 0 00:04:04.513 EAL: Detected lcore 80 as core 8 on socket 0 00:04:04.513 EAL: Detected lcore 81 as core 9 on socket 0 00:04:04.513 EAL: Detected lcore 82 as core 10 on socket 0 00:04:04.513 EAL: Detected lcore 83 as core 11 on socket 0 00:04:04.513 EAL: Detected lcore 84 as core 12 on socket 0 00:04:04.513 EAL: Detected lcore 85 as core 13 on socket 0 00:04:04.513 EAL: Detected lcore 86 as core 14 on socket 0 00:04:04.513 EAL: Detected lcore 87 as core 15 on socket 0 00:04:04.513 EAL: Detected lcore 88 as core 16 on socket 0 00:04:04.513 EAL: Detected lcore 89 as core 17 on socket 0 00:04:04.513 EAL: Detected lcore 90 as core 18 on socket 0 00:04:04.513 EAL: Detected lcore 91 as core 19 on socket 0 00:04:04.513 EAL: Detected lcore 92 as core 20 on socket 0 00:04:04.513 EAL: Detected lcore 93 as core 21 on socket 0 00:04:04.513 EAL: Detected lcore 94 as core 22 on socket 0 00:04:04.513 EAL: Detected lcore 95 as core 23 on socket 0 00:04:04.513 EAL: Detected lcore 96 as core 24 on socket 0 00:04:04.513 EAL: Detected lcore 97 as core 25 on socket 0 00:04:04.513 EAL: Detected lcore 98 as core 26 on socket 0 00:04:04.513 EAL: Detected lcore 99 as core 27 on socket 0 00:04:04.513 EAL: Detected lcore 100 as core 28 on socket 0 00:04:04.513 EAL: Detected lcore 101 as core 29 on socket 0 00:04:04.513 EAL: Detected lcore 102 as core 30 on socket 0 00:04:04.513 EAL: Detected lcore 103 as core 31 on socket 0 00:04:04.513 EAL: Detected lcore 104 as core 32 on socket 0 00:04:04.513 EAL: Detected lcore 105 as core 33 on socket 0 00:04:04.513 EAL: 
Detected lcore 106 as core 34 on socket 0 00:04:04.513 EAL: Detected lcore 107 as core 35 on socket 0 00:04:04.513 EAL: Detected lcore 108 as core 0 on socket 1 00:04:04.513 EAL: Detected lcore 109 as core 1 on socket 1 00:04:04.513 EAL: Detected lcore 110 as core 2 on socket 1 00:04:04.513 EAL: Detected lcore 111 as core 3 on socket 1 00:04:04.513 EAL: Detected lcore 112 as core 4 on socket 1 00:04:04.513 EAL: Detected lcore 113 as core 5 on socket 1 00:04:04.513 EAL: Detected lcore 114 as core 6 on socket 1 00:04:04.513 EAL: Detected lcore 115 as core 7 on socket 1 00:04:04.513 EAL: Detected lcore 116 as core 8 on socket 1 00:04:04.513 EAL: Detected lcore 117 as core 9 on socket 1 00:04:04.513 EAL: Detected lcore 118 as core 10 on socket 1 00:04:04.513 EAL: Detected lcore 119 as core 11 on socket 1 00:04:04.513 EAL: Detected lcore 120 as core 12 on socket 1 00:04:04.513 EAL: Detected lcore 121 as core 13 on socket 1 00:04:04.513 EAL: Detected lcore 122 as core 14 on socket 1 00:04:04.513 EAL: Detected lcore 123 as core 15 on socket 1 00:04:04.513 EAL: Detected lcore 124 as core 16 on socket 1 00:04:04.513 EAL: Detected lcore 125 as core 17 on socket 1 00:04:04.513 EAL: Detected lcore 126 as core 18 on socket 1 00:04:04.513 EAL: Detected lcore 127 as core 19 on socket 1 00:04:04.513 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:04.513 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:04.513 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:04.513 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:04.513 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:04.513 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:04.513 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:04.513 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:04.513 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:04.513 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:04.513 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:04.513 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:04.513 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:04.513 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:04.513 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:04.513 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:04.513 EAL: Maximum logical cores by configuration: 128 00:04:04.513 EAL: Detected CPU lcores: 128 00:04:04.513 EAL: Detected NUMA nodes: 2 00:04:04.513 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:04.513 EAL: Detected shared linkage of DPDK 00:04:04.513 EAL: No shared files mode enabled, IPC will be disabled 00:04:04.513 EAL: Bus pci wants IOVA as 'DC' 00:04:04.513 EAL: Buses did not request a specific IOVA mode. 00:04:04.513 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:04.513 EAL: Selected IOVA mode 'VA' 00:04:04.513 EAL: Probing VFIO support... 00:04:04.513 EAL: IOMMU type 1 (Type 1) is supported 00:04:04.513 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:04.513 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:04.513 EAL: VFIO support initialized 00:04:04.513 EAL: Ask a virtual area of 0x2e000 bytes 00:04:04.513 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:04.513 EAL: Setting up physically contiguous memory... 
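EAL settles on IOVA-as-VA with VFIO above, then in the lines that follow it reserves four memseg lists per NUMA socket, each 0x61000 bytes of metadata plus a 0x400000000-byte address range (16 GiB, i.e. 8192 segments of 2 MiB). A sketch of inspecting the per-node hugepage pools those reservations draw from, using standard sysfs paths and mirroring the Hugepages table that setup.sh status printed earlier:

    # Report free/total hugepages per NUMA node and page size (standard sysfs).
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            printf '%s %s free=%s total=%s\n' "${node##*/}" "${hp##*/}" \
                "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
        done
    done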
00:04:04.513 EAL: Setting maximum number of open files to 524288 00:04:04.513 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:04.513 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:04.513 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:04.513 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.513 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:04.513 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.513 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.513 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:04.513 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:04.513 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.514 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:04.514 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.514 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.514 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:04.514 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:04.514 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.514 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:04.514 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.514 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.514 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:04.514 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:04.514 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.514 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:04.514 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.514 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.514 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:04.514 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:04.514 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:04.514 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.514 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:04.514 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:04.514 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.514 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:04.514 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:04.514 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.514 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:04.514 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:04.514 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.514 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:04.514 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:04.514 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.514 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:04.514 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:04.514 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.514 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:04.514 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:04.514 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.514 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:04.514 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:04.514 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.514 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:04.514 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:04.514 EAL: Hugepages will be freed exactly as allocated. 00:04:04.514 EAL: No shared files mode enabled, IPC is disabled 00:04:04.514 EAL: No shared files mode enabled, IPC is disabled 00:04:04.514 EAL: TSC frequency is ~2400000 KHz 00:04:04.514 EAL: Main lcore 0 is ready (tid=7f932587ea00;cpuset=[0]) 00:04:04.514 EAL: Trying to obtain current memory policy. 00:04:04.514 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.514 EAL: Restoring previous memory policy: 0 00:04:04.514 EAL: request: mp_malloc_sync 00:04:04.514 EAL: No shared files mode enabled, IPC is disabled 00:04:04.514 EAL: Heap on socket 0 was expanded by 2MB 00:04:04.514 EAL: No shared files mode enabled, IPC is disabled 00:04:04.774 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:04.774 EAL: Mem event callback 'spdk:(nil)' registered 00:04:04.774 00:04:04.774 00:04:04.774 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.774 http://cunit.sourceforge.net/ 00:04:04.774 00:04:04.774 00:04:04.774 Suite: components_suite 00:04:04.774 Test: vtophys_malloc_test ...passed 00:04:04.774 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:04.774 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.774 EAL: Restoring previous memory policy: 4 00:04:04.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.774 EAL: request: mp_malloc_sync 00:04:04.774 EAL: No shared files mode enabled, IPC is disabled 00:04:04.774 EAL: Heap on socket 0 was expanded by 4MB 00:04:04.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.774 EAL: request: mp_malloc_sync 00:04:04.774 EAL: No shared files mode enabled, IPC is disabled 00:04:04.774 EAL: Heap on socket 0 was shrunk by 4MB 00:04:04.774 EAL: Trying to obtain current memory policy. 00:04:04.774 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.774 EAL: Restoring previous memory policy: 4 00:04:04.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.774 EAL: request: mp_malloc_sync 00:04:04.774 EAL: No shared files mode enabled, IPC is disabled 00:04:04.774 EAL: Heap on socket 0 was expanded by 6MB 00:04:04.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.774 EAL: request: mp_malloc_sync 00:04:04.774 EAL: No shared files mode enabled, IPC is disabled 00:04:04.774 EAL: Heap on socket 0 was shrunk by 6MB 00:04:04.774 EAL: Trying to obtain current memory policy. 00:04:04.774 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.774 EAL: Restoring previous memory policy: 4 00:04:04.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.774 EAL: request: mp_malloc_sync 00:04:04.774 EAL: No shared files mode enabled, IPC is disabled 00:04:04.774 EAL: Heap on socket 0 was expanded by 10MB 00:04:04.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.774 EAL: request: mp_malloc_sync 00:04:04.774 EAL: No shared files mode enabled, IPC is disabled 00:04:04.774 EAL: Heap on socket 0 was shrunk by 10MB 00:04:04.774 EAL: Trying to obtain current memory policy. 
00:04:04.774 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.774 EAL: Restoring previous memory policy: 4 00:04:04.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.774 EAL: request: mp_malloc_sync 00:04:04.774 EAL: No shared files mode enabled, IPC is disabled 00:04:04.774 EAL: Heap on socket 0 was expanded by 18MB 00:04:04.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.774 EAL: request: mp_malloc_sync 00:04:04.774 EAL: No shared files mode enabled, IPC is disabled 00:04:04.774 EAL: Heap on socket 0 was shrunk by 18MB 00:04:04.774 EAL: Trying to obtain current memory policy. 00:04:04.774 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.774 EAL: Restoring previous memory policy: 4 00:04:04.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.774 EAL: request: mp_malloc_sync 00:04:04.774 EAL: No shared files mode enabled, IPC is disabled 00:04:04.774 EAL: Heap on socket 0 was expanded by 34MB 00:04:04.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.774 EAL: request: mp_malloc_sync 00:04:04.774 EAL: No shared files mode enabled, IPC is disabled 00:04:04.774 EAL: Heap on socket 0 was shrunk by 34MB 00:04:04.774 EAL: Trying to obtain current memory policy. 00:04:04.774 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.774 EAL: Restoring previous memory policy: 4 00:04:04.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.774 EAL: request: mp_malloc_sync 00:04:04.774 EAL: No shared files mode enabled, IPC is disabled 00:04:04.774 EAL: Heap on socket 0 was expanded by 66MB 00:04:04.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.774 EAL: request: mp_malloc_sync 00:04:04.774 EAL: No shared files mode enabled, IPC is disabled 00:04:04.774 EAL: Heap on socket 0 was shrunk by 66MB 00:04:04.774 EAL: Trying to obtain current memory policy. 00:04:04.774 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.774 EAL: Restoring previous memory policy: 4 00:04:04.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.774 EAL: request: mp_malloc_sync 00:04:04.774 EAL: No shared files mode enabled, IPC is disabled 00:04:04.774 EAL: Heap on socket 0 was expanded by 130MB 00:04:04.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.774 EAL: request: mp_malloc_sync 00:04:04.774 EAL: No shared files mode enabled, IPC is disabled 00:04:04.774 EAL: Heap on socket 0 was shrunk by 130MB 00:04:04.774 EAL: Trying to obtain current memory policy. 00:04:04.774 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.774 EAL: Restoring previous memory policy: 4 00:04:04.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.774 EAL: request: mp_malloc_sync 00:04:04.774 EAL: No shared files mode enabled, IPC is disabled 00:04:04.774 EAL: Heap on socket 0 was expanded by 258MB 00:04:04.774 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.774 EAL: request: mp_malloc_sync 00:04:04.774 EAL: No shared files mode enabled, IPC is disabled 00:04:04.774 EAL: Heap on socket 0 was shrunk by 258MB 00:04:04.774 EAL: Trying to obtain current memory policy. 
00:04:04.774 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.034 EAL: Restoring previous memory policy: 4 00:04:05.034 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.034 EAL: request: mp_malloc_sync 00:04:05.034 EAL: No shared files mode enabled, IPC is disabled 00:04:05.034 EAL: Heap on socket 0 was expanded by 514MB 00:04:05.034 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.034 EAL: request: mp_malloc_sync 00:04:05.034 EAL: No shared files mode enabled, IPC is disabled 00:04:05.034 EAL: Heap on socket 0 was shrunk by 514MB 00:04:05.034 EAL: Trying to obtain current memory policy. 00:04:05.034 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.294 EAL: Restoring previous memory policy: 4 00:04:05.294 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.294 EAL: request: mp_malloc_sync 00:04:05.294 EAL: No shared files mode enabled, IPC is disabled 00:04:05.294 EAL: Heap on socket 0 was expanded by 1026MB 00:04:05.294 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.294 EAL: request: mp_malloc_sync 00:04:05.294 EAL: No shared files mode enabled, IPC is disabled 00:04:05.294 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:05.294 passed 00:04:05.294 00:04:05.294 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.294 suites 1 1 n/a 0 0 00:04:05.294 tests 2 2 2 0 0 00:04:05.294 asserts 497 497 497 0 n/a 00:04:05.294 00:04:05.294 Elapsed time = 0.689 seconds 00:04:05.294 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.294 EAL: request: mp_malloc_sync 00:04:05.294 EAL: No shared files mode enabled, IPC is disabled 00:04:05.294 EAL: Heap on socket 0 was shrunk by 2MB 00:04:05.294 EAL: No shared files mode enabled, IPC is disabled 00:04:05.294 EAL: No shared files mode enabled, IPC is disabled 00:04:05.294 EAL: No shared files mode enabled, IPC is disabled 00:04:05.294 00:04:05.294 real 0m0.842s 00:04:05.294 user 0m0.446s 00:04:05.294 sys 0m0.364s 00:04:05.294 07:02:27 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:05.294 07:02:27 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:05.294 ************************************ 00:04:05.294 END TEST env_vtophys 00:04:05.294 ************************************ 00:04:05.555 07:02:27 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:05.555 07:02:27 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:05.555 07:02:27 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:05.555 07:02:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.555 ************************************ 00:04:05.555 START TEST env_pci 00:04:05.555 ************************************ 00:04:05.555 07:02:27 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:05.555 00:04:05.555 00:04:05.555 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.555 http://cunit.sourceforge.net/ 00:04:05.555 00:04:05.555 00:04:05.555 Suite: pci 00:04:05.555 Test: pci_hook ...[2024-11-20 07:02:27.652981] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3275018 has claimed it 00:04:05.555 EAL: Cannot find device (10000:00:01.0) 00:04:05.555 EAL: Failed to attach device on primary process 00:04:05.555 passed 00:04:05.555 00:04:05.555 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:05.555 suites 1 1 n/a 0 0 00:04:05.555 tests 1 1 1 0 0 00:04:05.555 asserts 25 25 25 0 n/a 00:04:05.555 00:04:05.555 Elapsed time = 0.030 seconds 00:04:05.555 00:04:05.555 real 0m0.052s 00:04:05.555 user 0m0.015s 00:04:05.555 sys 0m0.037s 00:04:05.555 07:02:27 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:05.555 07:02:27 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:05.555 ************************************ 00:04:05.555 END TEST env_pci 00:04:05.555 ************************************ 00:04:05.555 07:02:27 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:05.555 07:02:27 env -- env/env.sh@15 -- # uname 00:04:05.555 07:02:27 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:05.555 07:02:27 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:05.555 07:02:27 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:05.555 07:02:27 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:04:05.555 07:02:27 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:05.555 07:02:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.555 ************************************ 00:04:05.555 START TEST env_dpdk_post_init 00:04:05.555 ************************************ 00:04:05.555 07:02:27 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:05.556 EAL: Detected CPU lcores: 128 00:04:05.556 EAL: Detected NUMA nodes: 2 00:04:05.556 EAL: Detected shared linkage of DPDK 00:04:05.556 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:05.816 EAL: Selected IOVA mode 'VA' 00:04:05.816 EAL: VFIO support initialized 00:04:05.816 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:05.816 EAL: Using IOMMU type 1 (Type 1) 00:04:05.816 EAL: Ignore mapping IO port bar(1) 00:04:06.076 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:06.076 EAL: Ignore mapping IO port bar(1) 00:04:06.336 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:06.336 EAL: Ignore mapping IO port bar(1) 00:04:06.336 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:06.596 EAL: Ignore mapping IO port bar(1) 00:04:06.596 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:06.857 EAL: Ignore mapping IO port bar(1) 00:04:06.857 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:07.118 EAL: Ignore mapping IO port bar(1) 00:04:07.118 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:07.118 EAL: Ignore mapping IO port bar(1) 00:04:07.378 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:07.378 EAL: Ignore mapping IO port bar(1) 00:04:07.638 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:07.904 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:07.904 EAL: Ignore mapping IO port bar(1) 00:04:07.904 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:08.168 EAL: Ignore mapping IO port bar(1) 00:04:08.168 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:08.429 EAL: Ignore mapping IO port bar(1) 00:04:08.429 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:08.690 EAL: Ignore mapping IO port bar(1) 00:04:08.690 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:08.690 EAL: Ignore mapping IO port bar(1) 00:04:08.950 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:08.950 EAL: Ignore mapping IO port bar(1) 00:04:09.212 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:09.212 EAL: Ignore mapping IO port bar(1) 00:04:09.498 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:09.498 EAL: Ignore mapping IO port bar(1) 00:04:09.498 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:09.498 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:09.498 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:09.758 Starting DPDK initialization... 00:04:09.758 Starting SPDK post initialization... 00:04:09.758 SPDK NVMe probe 00:04:09.758 Attaching to 0000:65:00.0 00:04:09.758 Attached to 0000:65:00.0 00:04:09.758 Cleaning up... 00:04:11.712 00:04:11.712 real 0m5.747s 00:04:11.712 user 0m0.098s 00:04:11.712 sys 0m0.206s 00:04:11.712 07:02:33 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:11.712 07:02:33 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:11.712 ************************************ 00:04:11.712 END TEST env_dpdk_post_init 00:04:11.712 ************************************ 00:04:11.712 07:02:33 env -- env/env.sh@26 -- # uname 00:04:11.712 07:02:33 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:11.712 07:02:33 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:11.712 07:02:33 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:11.712 07:02:33 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:11.712 07:02:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:11.712 ************************************ 00:04:11.712 START TEST env_mem_callbacks 00:04:11.713 ************************************ 00:04:11.713 07:02:33 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:11.713 EAL: Detected CPU lcores: 128 00:04:11.713 EAL: Detected NUMA nodes: 2 00:04:11.713 EAL: Detected shared linkage of DPDK 00:04:11.713 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:11.713 EAL: Selected IOVA mode 'VA' 00:04:11.713 EAL: VFIO support initialized 00:04:11.713 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:11.713 00:04:11.713 00:04:11.713 CUnit - A unit testing framework for C - Version 2.1-3 00:04:11.713 http://cunit.sourceforge.net/ 00:04:11.713 00:04:11.713 00:04:11.713 Suite: memory 00:04:11.713 Test: test ... 
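The env_mem_callbacks suite starting here exercises the same callback path from the application side: the test mallocs and frees DPDK memory, and the register/unregister lines that follow appear to be printed by the memory hooks the test installs, one per tracked region, with the buf/len PASSED lines being CUnit checks on the returned buffers. To rerun just this suite outside the harness, invoking the built binary directly should work (path taken from the run_test invocation in this log; this assumes hugepages are already configured and that root is needed for hugepage access):

sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks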
00:04:11.713 register 0x200000200000 2097152 00:04:11.713 malloc 3145728 00:04:11.713 register 0x200000400000 4194304 00:04:11.713 buf 0x200000500000 len 3145728 PASSED 00:04:11.713 malloc 64 00:04:11.713 buf 0x2000004fff40 len 64 PASSED 00:04:11.713 malloc 4194304 00:04:11.713 register 0x200000800000 6291456 00:04:11.713 buf 0x200000a00000 len 4194304 PASSED 00:04:11.713 free 0x200000500000 3145728 00:04:11.713 free 0x2000004fff40 64 00:04:11.713 unregister 0x200000400000 4194304 PASSED 00:04:11.713 free 0x200000a00000 4194304 00:04:11.713 unregister 0x200000800000 6291456 PASSED 00:04:11.713 malloc 8388608 00:04:11.713 register 0x200000400000 10485760 00:04:11.713 buf 0x200000600000 len 8388608 PASSED 00:04:11.713 free 0x200000600000 8388608 00:04:11.713 unregister 0x200000400000 10485760 PASSED 00:04:11.713 passed 00:04:11.713 00:04:11.713 Run Summary: Type Total Ran Passed Failed Inactive 00:04:11.713 suites 1 1 n/a 0 0 00:04:11.713 tests 1 1 1 0 0 00:04:11.713 asserts 15 15 15 0 n/a 00:04:11.713 00:04:11.713 Elapsed time = 0.010 seconds 00:04:11.713 00:04:11.713 real 0m0.069s 00:04:11.713 user 0m0.023s 00:04:11.713 sys 0m0.045s 00:04:11.713 07:02:33 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:11.713 07:02:33 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:11.713 ************************************ 00:04:11.713 END TEST env_mem_callbacks 00:04:11.713 ************************************ 00:04:11.713 00:04:11.713 real 0m7.540s 00:04:11.713 user 0m1.049s 00:04:11.713 sys 0m1.049s 00:04:11.713 07:02:33 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:11.713 07:02:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:11.713 ************************************ 00:04:11.713 END TEST env 00:04:11.713 ************************************ 00:04:11.713 07:02:33 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:11.713 07:02:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:11.713 07:02:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:11.713 07:02:33 -- common/autotest_common.sh@10 -- # set +x 00:04:11.713 ************************************ 00:04:11.713 START TEST rpc 00:04:11.713 ************************************ 00:04:11.713 07:02:33 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:11.713 * Looking for test storage... 
00:04:11.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:11.713 07:02:33 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:11.713 07:02:33 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:11.713 07:02:33 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:11.713 07:02:33 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:11.713 07:02:33 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.713 07:02:33 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.713 07:02:33 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.713 07:02:33 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.713 07:02:33 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.713 07:02:33 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.713 07:02:33 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.713 07:02:33 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.713 07:02:33 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.713 07:02:33 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.713 07:02:33 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.713 07:02:33 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:11.713 07:02:33 rpc -- scripts/common.sh@345 -- # : 1 00:04:11.713 07:02:33 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.713 07:02:33 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:11.713 07:02:33 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:11.973 07:02:33 rpc -- scripts/common.sh@353 -- # local d=1 00:04:11.973 07:02:33 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.973 07:02:33 rpc -- scripts/common.sh@355 -- # echo 1 00:04:11.973 07:02:33 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.973 07:02:33 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:11.973 07:02:33 rpc -- scripts/common.sh@353 -- # local d=2 00:04:11.973 07:02:33 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.973 07:02:33 rpc -- scripts/common.sh@355 -- # echo 2 00:04:11.973 07:02:33 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.973 07:02:33 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.973 07:02:33 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.973 07:02:33 rpc -- scripts/common.sh@368 -- # return 0 00:04:11.973 07:02:33 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.973 07:02:33 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:11.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.973 --rc genhtml_branch_coverage=1 00:04:11.973 --rc genhtml_function_coverage=1 00:04:11.973 --rc genhtml_legend=1 00:04:11.973 --rc geninfo_all_blocks=1 00:04:11.973 --rc geninfo_unexecuted_blocks=1 00:04:11.973 00:04:11.973 ' 00:04:11.973 07:02:33 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:11.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.973 --rc genhtml_branch_coverage=1 00:04:11.973 --rc genhtml_function_coverage=1 00:04:11.973 --rc genhtml_legend=1 00:04:11.973 --rc geninfo_all_blocks=1 00:04:11.973 --rc geninfo_unexecuted_blocks=1 00:04:11.973 00:04:11.973 ' 00:04:11.973 07:02:33 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:11.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.973 --rc genhtml_branch_coverage=1 00:04:11.973 --rc genhtml_function_coverage=1 
00:04:11.973 --rc genhtml_legend=1 00:04:11.973 --rc geninfo_all_blocks=1 00:04:11.973 --rc geninfo_unexecuted_blocks=1 00:04:11.973 00:04:11.973 ' 00:04:11.973 07:02:33 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:11.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.973 --rc genhtml_branch_coverage=1 00:04:11.973 --rc genhtml_function_coverage=1 00:04:11.973 --rc genhtml_legend=1 00:04:11.973 --rc geninfo_all_blocks=1 00:04:11.974 --rc geninfo_unexecuted_blocks=1 00:04:11.974 00:04:11.974 ' 00:04:11.974 07:02:33 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3276307 00:04:11.974 07:02:33 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.974 07:02:33 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3276307 00:04:11.974 07:02:33 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:11.974 07:02:33 rpc -- common/autotest_common.sh@833 -- # '[' -z 3276307 ']' 00:04:11.974 07:02:33 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.974 07:02:33 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:11.974 07:02:34 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.974 07:02:34 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:11.974 07:02:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.974 [2024-11-20 07:02:34.069488] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:04:11.974 [2024-11-20 07:02:34.069564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276307 ] 00:04:11.974 [2024-11-20 07:02:34.163859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.974 [2024-11-20 07:02:34.215938] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:11.974 [2024-11-20 07:02:34.215996] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3276307' to capture a snapshot of events at runtime. 00:04:11.974 [2024-11-20 07:02:34.216005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:11.974 [2024-11-20 07:02:34.216013] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:11.974 [2024-11-20 07:02:34.216019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3276307 for offline analysis/debug. 
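The target was launched with -e bdev, so only the bdev tracepoint group is enabled — which is why trace_get_info below reports tpoint_group_mask 0x8 and a tpoint_mask of 0xffffffffffffffff for "bdev" but 0x0 for everything else. The startup notice above spells out how to look at the trace buffer while the run is live; roughly, with the pid from this run:

# Live snapshot from the running target, exactly as the notice suggests:
build/bin/spdk_trace -s spdk_tgt -p 3276307
# Or offline: copy the shm file the notice names and point the tool at the copy
# (assuming spdk_trace's -f option for reading a saved trace file):
cp /dev/shm/spdk_tgt_trace.pid3276307 . && build/bin/spdk_trace -f ./spdk_tgt_trace.pid3276307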
00:04:11.974 [2024-11-20 07:02:34.216830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.915 07:02:34 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:12.915 07:02:34 rpc -- common/autotest_common.sh@866 -- # return 0 00:04:12.915 07:02:34 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:12.915 07:02:34 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:12.915 07:02:34 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:12.915 07:02:34 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:12.915 07:02:34 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:12.915 07:02:34 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:12.915 07:02:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.915 ************************************ 00:04:12.915 START TEST rpc_integrity 00:04:12.915 ************************************ 00:04:12.915 07:02:34 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:12.915 07:02:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:12.915 07:02:34 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.915 07:02:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.915 07:02:34 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.915 07:02:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:12.915 07:02:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:12.915 07:02:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:12.915 07:02:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:12.915 07:02:34 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.915 07:02:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.915 07:02:34 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.915 07:02:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:12.915 07:02:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:12.915 07:02:34 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.915 07:02:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.915 07:02:34 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.915 07:02:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:12.915 { 00:04:12.915 "name": "Malloc0", 00:04:12.915 "aliases": [ 00:04:12.915 "e4774476-389d-4046-a827-4c06080bd122" 00:04:12.915 ], 00:04:12.915 "product_name": "Malloc disk", 00:04:12.915 "block_size": 512, 00:04:12.915 "num_blocks": 16384, 00:04:12.915 "uuid": "e4774476-389d-4046-a827-4c06080bd122", 00:04:12.915 "assigned_rate_limits": { 00:04:12.915 "rw_ios_per_sec": 0, 00:04:12.915 "rw_mbytes_per_sec": 0, 00:04:12.915 "r_mbytes_per_sec": 0, 00:04:12.915 "w_mbytes_per_sec": 0 00:04:12.915 }, 
00:04:12.915 "claimed": false, 00:04:12.915 "zoned": false, 00:04:12.915 "supported_io_types": { 00:04:12.915 "read": true, 00:04:12.915 "write": true, 00:04:12.915 "unmap": true, 00:04:12.915 "flush": true, 00:04:12.915 "reset": true, 00:04:12.915 "nvme_admin": false, 00:04:12.915 "nvme_io": false, 00:04:12.915 "nvme_io_md": false, 00:04:12.915 "write_zeroes": true, 00:04:12.915 "zcopy": true, 00:04:12.915 "get_zone_info": false, 00:04:12.915 "zone_management": false, 00:04:12.915 "zone_append": false, 00:04:12.915 "compare": false, 00:04:12.915 "compare_and_write": false, 00:04:12.915 "abort": true, 00:04:12.915 "seek_hole": false, 00:04:12.915 "seek_data": false, 00:04:12.915 "copy": true, 00:04:12.915 "nvme_iov_md": false 00:04:12.915 }, 00:04:12.915 "memory_domains": [ 00:04:12.915 { 00:04:12.915 "dma_device_id": "system", 00:04:12.915 "dma_device_type": 1 00:04:12.915 }, 00:04:12.915 { 00:04:12.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.915 "dma_device_type": 2 00:04:12.915 } 00:04:12.915 ], 00:04:12.915 "driver_specific": {} 00:04:12.915 } 00:04:12.915 ]' 00:04:12.915 07:02:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:12.915 07:02:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:12.915 07:02:35 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:12.915 07:02:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.915 07:02:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.915 [2024-11-20 07:02:35.055745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:12.915 [2024-11-20 07:02:35.055793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:12.915 [2024-11-20 07:02:35.055810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1459800 00:04:12.915 [2024-11-20 07:02:35.055819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:12.915 [2024-11-20 07:02:35.057389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:12.915 [2024-11-20 07:02:35.057439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:12.915 Passthru0 00:04:12.915 07:02:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.915 07:02:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:12.915 07:02:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.915 07:02:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.915 07:02:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.915 07:02:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:12.915 { 00:04:12.915 "name": "Malloc0", 00:04:12.915 "aliases": [ 00:04:12.915 "e4774476-389d-4046-a827-4c06080bd122" 00:04:12.915 ], 00:04:12.915 "product_name": "Malloc disk", 00:04:12.916 "block_size": 512, 00:04:12.916 "num_blocks": 16384, 00:04:12.916 "uuid": "e4774476-389d-4046-a827-4c06080bd122", 00:04:12.916 "assigned_rate_limits": { 00:04:12.916 "rw_ios_per_sec": 0, 00:04:12.916 "rw_mbytes_per_sec": 0, 00:04:12.916 "r_mbytes_per_sec": 0, 00:04:12.916 "w_mbytes_per_sec": 0 00:04:12.916 }, 00:04:12.916 "claimed": true, 00:04:12.916 "claim_type": "exclusive_write", 00:04:12.916 "zoned": false, 00:04:12.916 "supported_io_types": { 00:04:12.916 "read": true, 00:04:12.916 "write": true, 00:04:12.916 "unmap": true, 00:04:12.916 "flush": 
true, 00:04:12.916 "reset": true, 00:04:12.916 "nvme_admin": false, 00:04:12.916 "nvme_io": false, 00:04:12.916 "nvme_io_md": false, 00:04:12.916 "write_zeroes": true, 00:04:12.916 "zcopy": true, 00:04:12.916 "get_zone_info": false, 00:04:12.916 "zone_management": false, 00:04:12.916 "zone_append": false, 00:04:12.916 "compare": false, 00:04:12.916 "compare_and_write": false, 00:04:12.916 "abort": true, 00:04:12.916 "seek_hole": false, 00:04:12.916 "seek_data": false, 00:04:12.916 "copy": true, 00:04:12.916 "nvme_iov_md": false 00:04:12.916 }, 00:04:12.916 "memory_domains": [ 00:04:12.916 { 00:04:12.916 "dma_device_id": "system", 00:04:12.916 "dma_device_type": 1 00:04:12.916 }, 00:04:12.916 { 00:04:12.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.916 "dma_device_type": 2 00:04:12.916 } 00:04:12.916 ], 00:04:12.916 "driver_specific": {} 00:04:12.916 }, 00:04:12.916 { 00:04:12.916 "name": "Passthru0", 00:04:12.916 "aliases": [ 00:04:12.916 "1352a8b9-16cc-5ec3-9c48-eb1587eab17c" 00:04:12.916 ], 00:04:12.916 "product_name": "passthru", 00:04:12.916 "block_size": 512, 00:04:12.916 "num_blocks": 16384, 00:04:12.916 "uuid": "1352a8b9-16cc-5ec3-9c48-eb1587eab17c", 00:04:12.916 "assigned_rate_limits": { 00:04:12.916 "rw_ios_per_sec": 0, 00:04:12.916 "rw_mbytes_per_sec": 0, 00:04:12.916 "r_mbytes_per_sec": 0, 00:04:12.916 "w_mbytes_per_sec": 0 00:04:12.916 }, 00:04:12.916 "claimed": false, 00:04:12.916 "zoned": false, 00:04:12.916 "supported_io_types": { 00:04:12.916 "read": true, 00:04:12.916 "write": true, 00:04:12.916 "unmap": true, 00:04:12.916 "flush": true, 00:04:12.916 "reset": true, 00:04:12.916 "nvme_admin": false, 00:04:12.916 "nvme_io": false, 00:04:12.916 "nvme_io_md": false, 00:04:12.916 "write_zeroes": true, 00:04:12.916 "zcopy": true, 00:04:12.916 "get_zone_info": false, 00:04:12.916 "zone_management": false, 00:04:12.916 "zone_append": false, 00:04:12.916 "compare": false, 00:04:12.916 "compare_and_write": false, 00:04:12.916 "abort": true, 00:04:12.916 "seek_hole": false, 00:04:12.916 "seek_data": false, 00:04:12.916 "copy": true, 00:04:12.916 "nvme_iov_md": false 00:04:12.916 }, 00:04:12.916 "memory_domains": [ 00:04:12.916 { 00:04:12.916 "dma_device_id": "system", 00:04:12.916 "dma_device_type": 1 00:04:12.916 }, 00:04:12.916 { 00:04:12.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.916 "dma_device_type": 2 00:04:12.916 } 00:04:12.916 ], 00:04:12.916 "driver_specific": { 00:04:12.916 "passthru": { 00:04:12.916 "name": "Passthru0", 00:04:12.916 "base_bdev_name": "Malloc0" 00:04:12.916 } 00:04:12.916 } 00:04:12.916 } 00:04:12.916 ]' 00:04:12.916 07:02:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:12.916 07:02:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:12.916 07:02:35 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:12.916 07:02:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.916 07:02:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.916 07:02:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.916 07:02:35 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:12.916 07:02:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.916 07:02:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.916 07:02:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.916 07:02:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:12.916 07:02:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.916 07:02:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.916 07:02:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.916 07:02:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:12.916 07:02:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:13.177 07:02:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:13.177 00:04:13.177 real 0m0.304s 00:04:13.177 user 0m0.179s 00:04:13.177 sys 0m0.052s 00:04:13.177 07:02:35 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:13.177 07:02:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.177 ************************************ 00:04:13.177 END TEST rpc_integrity 00:04:13.177 ************************************ 00:04:13.177 07:02:35 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:13.177 07:02:35 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:13.177 07:02:35 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:13.177 07:02:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.177 ************************************ 00:04:13.177 START TEST rpc_plugins 00:04:13.177 ************************************ 00:04:13.177 07:02:35 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:04:13.177 07:02:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:13.177 07:02:35 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.177 07:02:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.177 07:02:35 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.177 07:02:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:13.177 07:02:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:13.177 07:02:35 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.177 07:02:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.177 07:02:35 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.177 07:02:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:13.177 { 00:04:13.177 "name": "Malloc1", 00:04:13.177 "aliases": [ 00:04:13.177 "d6cfbfda-a21f-4c07-bdba-80c69bbfb1ed" 00:04:13.177 ], 00:04:13.177 "product_name": "Malloc disk", 00:04:13.177 "block_size": 4096, 00:04:13.177 "num_blocks": 256, 00:04:13.177 "uuid": "d6cfbfda-a21f-4c07-bdba-80c69bbfb1ed", 00:04:13.177 "assigned_rate_limits": { 00:04:13.177 "rw_ios_per_sec": 0, 00:04:13.177 "rw_mbytes_per_sec": 0, 00:04:13.177 "r_mbytes_per_sec": 0, 00:04:13.177 "w_mbytes_per_sec": 0 00:04:13.177 }, 00:04:13.177 "claimed": false, 00:04:13.177 "zoned": false, 00:04:13.177 "supported_io_types": { 00:04:13.177 "read": true, 00:04:13.177 "write": true, 00:04:13.177 "unmap": true, 00:04:13.177 "flush": true, 00:04:13.177 "reset": true, 00:04:13.177 "nvme_admin": false, 00:04:13.177 "nvme_io": false, 00:04:13.177 "nvme_io_md": false, 00:04:13.177 "write_zeroes": true, 00:04:13.177 "zcopy": true, 00:04:13.177 "get_zone_info": false, 00:04:13.177 "zone_management": false, 00:04:13.177 "zone_append": false, 00:04:13.177 "compare": false, 00:04:13.177 "compare_and_write": false, 00:04:13.177 "abort": true, 00:04:13.177 "seek_hole": false, 00:04:13.177 "seek_data": false, 00:04:13.177 "copy": true, 00:04:13.177 "nvme_iov_md": false 
00:04:13.177 }, 00:04:13.177 "memory_domains": [ 00:04:13.177 { 00:04:13.177 "dma_device_id": "system", 00:04:13.177 "dma_device_type": 1 00:04:13.177 }, 00:04:13.177 { 00:04:13.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.177 "dma_device_type": 2 00:04:13.177 } 00:04:13.177 ], 00:04:13.177 "driver_specific": {} 00:04:13.177 } 00:04:13.177 ]' 00:04:13.177 07:02:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:13.177 07:02:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:13.177 07:02:35 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:13.177 07:02:35 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.177 07:02:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.177 07:02:35 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.177 07:02:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:13.177 07:02:35 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.177 07:02:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.177 07:02:35 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.177 07:02:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:13.177 07:02:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:13.177 07:02:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:13.177 00:04:13.177 real 0m0.155s 00:04:13.177 user 0m0.100s 00:04:13.177 sys 0m0.016s 00:04:13.177 07:02:35 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:13.177 07:02:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.177 ************************************ 00:04:13.177 END TEST rpc_plugins 00:04:13.177 ************************************ 00:04:13.438 07:02:35 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:13.438 07:02:35 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:13.438 07:02:35 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:13.438 07:02:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.438 ************************************ 00:04:13.438 START TEST rpc_trace_cmd_test 00:04:13.438 ************************************ 00:04:13.438 07:02:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:04:13.438 07:02:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:13.438 07:02:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:13.438 07:02:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.438 07:02:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:13.438 07:02:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.438 07:02:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:13.438 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3276307", 00:04:13.438 "tpoint_group_mask": "0x8", 00:04:13.438 "iscsi_conn": { 00:04:13.438 "mask": "0x2", 00:04:13.438 "tpoint_mask": "0x0" 00:04:13.438 }, 00:04:13.438 "scsi": { 00:04:13.438 "mask": "0x4", 00:04:13.438 "tpoint_mask": "0x0" 00:04:13.438 }, 00:04:13.438 "bdev": { 00:04:13.438 "mask": "0x8", 00:04:13.438 "tpoint_mask": "0xffffffffffffffff" 00:04:13.438 }, 00:04:13.438 "nvmf_rdma": { 00:04:13.438 "mask": "0x10", 00:04:13.438 "tpoint_mask": "0x0" 00:04:13.438 }, 00:04:13.438 "nvmf_tcp": { 00:04:13.438 "mask": "0x20", 00:04:13.438 
"tpoint_mask": "0x0" 00:04:13.438 }, 00:04:13.438 "ftl": { 00:04:13.438 "mask": "0x40", 00:04:13.438 "tpoint_mask": "0x0" 00:04:13.438 }, 00:04:13.438 "blobfs": { 00:04:13.438 "mask": "0x80", 00:04:13.438 "tpoint_mask": "0x0" 00:04:13.438 }, 00:04:13.438 "dsa": { 00:04:13.438 "mask": "0x200", 00:04:13.438 "tpoint_mask": "0x0" 00:04:13.438 }, 00:04:13.438 "thread": { 00:04:13.438 "mask": "0x400", 00:04:13.438 "tpoint_mask": "0x0" 00:04:13.438 }, 00:04:13.438 "nvme_pcie": { 00:04:13.438 "mask": "0x800", 00:04:13.438 "tpoint_mask": "0x0" 00:04:13.438 }, 00:04:13.438 "iaa": { 00:04:13.438 "mask": "0x1000", 00:04:13.438 "tpoint_mask": "0x0" 00:04:13.438 }, 00:04:13.438 "nvme_tcp": { 00:04:13.438 "mask": "0x2000", 00:04:13.438 "tpoint_mask": "0x0" 00:04:13.438 }, 00:04:13.438 "bdev_nvme": { 00:04:13.438 "mask": "0x4000", 00:04:13.438 "tpoint_mask": "0x0" 00:04:13.438 }, 00:04:13.438 "sock": { 00:04:13.438 "mask": "0x8000", 00:04:13.438 "tpoint_mask": "0x0" 00:04:13.438 }, 00:04:13.438 "blob": { 00:04:13.438 "mask": "0x10000", 00:04:13.438 "tpoint_mask": "0x0" 00:04:13.438 }, 00:04:13.438 "bdev_raid": { 00:04:13.438 "mask": "0x20000", 00:04:13.438 "tpoint_mask": "0x0" 00:04:13.438 }, 00:04:13.438 "scheduler": { 00:04:13.438 "mask": "0x40000", 00:04:13.438 "tpoint_mask": "0x0" 00:04:13.438 } 00:04:13.438 }' 00:04:13.438 07:02:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:13.438 07:02:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:13.438 07:02:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:13.438 07:02:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:13.438 07:02:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:13.438 07:02:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:13.438 07:02:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:13.699 07:02:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:13.699 07:02:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:13.699 07:02:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:13.699 00:04:13.699 real 0m0.230s 00:04:13.699 user 0m0.187s 00:04:13.699 sys 0m0.035s 00:04:13.699 07:02:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:13.699 07:02:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:13.699 ************************************ 00:04:13.699 END TEST rpc_trace_cmd_test 00:04:13.699 ************************************ 00:04:13.699 07:02:35 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:13.699 07:02:35 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:13.699 07:02:35 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:13.699 07:02:35 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:13.699 07:02:35 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:13.699 07:02:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.699 ************************************ 00:04:13.699 START TEST rpc_daemon_integrity 00:04:13.699 ************************************ 00:04:13.699 07:02:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:13.699 07:02:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:13.699 07:02:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.699 07:02:35 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.699 07:02:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.699 07:02:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:13.699 07:02:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:13.699 07:02:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:13.699 07:02:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:13.699 07:02:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.699 07:02:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.699 07:02:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.699 07:02:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:13.699 07:02:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:13.699 07:02:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.699 07:02:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.699 07:02:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.699 07:02:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:13.699 { 00:04:13.699 "name": "Malloc2", 00:04:13.699 "aliases": [ 00:04:13.699 "937beac6-7466-43b9-a68d-6f13386b83f8" 00:04:13.699 ], 00:04:13.699 "product_name": "Malloc disk", 00:04:13.699 "block_size": 512, 00:04:13.699 "num_blocks": 16384, 00:04:13.699 "uuid": "937beac6-7466-43b9-a68d-6f13386b83f8", 00:04:13.699 "assigned_rate_limits": { 00:04:13.699 "rw_ios_per_sec": 0, 00:04:13.699 "rw_mbytes_per_sec": 0, 00:04:13.699 "r_mbytes_per_sec": 0, 00:04:13.699 "w_mbytes_per_sec": 0 00:04:13.699 }, 00:04:13.699 "claimed": false, 00:04:13.699 "zoned": false, 00:04:13.699 "supported_io_types": { 00:04:13.699 "read": true, 00:04:13.699 "write": true, 00:04:13.699 "unmap": true, 00:04:13.699 "flush": true, 00:04:13.699 "reset": true, 00:04:13.699 "nvme_admin": false, 00:04:13.699 "nvme_io": false, 00:04:13.699 "nvme_io_md": false, 00:04:13.699 "write_zeroes": true, 00:04:13.699 "zcopy": true, 00:04:13.699 "get_zone_info": false, 00:04:13.699 "zone_management": false, 00:04:13.699 "zone_append": false, 00:04:13.699 "compare": false, 00:04:13.699 "compare_and_write": false, 00:04:13.699 "abort": true, 00:04:13.699 "seek_hole": false, 00:04:13.699 "seek_data": false, 00:04:13.699 "copy": true, 00:04:13.699 "nvme_iov_md": false 00:04:13.699 }, 00:04:13.699 "memory_domains": [ 00:04:13.699 { 00:04:13.699 "dma_device_id": "system", 00:04:13.699 "dma_device_type": 1 00:04:13.699 }, 00:04:13.699 { 00:04:13.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.699 "dma_device_type": 2 00:04:13.699 } 00:04:13.699 ], 00:04:13.699 "driver_specific": {} 00:04:13.699 } 00:04:13.699 ]' 00:04:13.699 07:02:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:13.960 07:02:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:13.960 07:02:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:13.960 07:02:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.960 07:02:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.960 [2024-11-20 07:02:35.990302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:13.960 
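The passthru registration happening here is what flips Malloc2's JSON from "claimed": false to "claimed": true with "claim_type": "exclusive_write" in the dump that follows. While Passthru0 exists, the claim can be inspected directly — a sketch using the same RPC the test wraps (the -b name filter and the jq projection are illustrative):

scripts/rpc.py bdev_get_bdevs -b Malloc2 | jq '.[0] | {claimed, claim_type}'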
[2024-11-20 07:02:35.990342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:13.960 [2024-11-20 07:02:35.990360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1316920 00:04:13.960 [2024-11-20 07:02:35.990368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:13.960 [2024-11-20 07:02:35.991886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:13.961 [2024-11-20 07:02:35.991923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:13.961 Passthru0 00:04:13.961 07:02:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.961 07:02:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:13.961 07:02:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.961 07:02:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.961 07:02:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.961 07:02:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:13.961 { 00:04:13.961 "name": "Malloc2", 00:04:13.961 "aliases": [ 00:04:13.961 "937beac6-7466-43b9-a68d-6f13386b83f8" 00:04:13.961 ], 00:04:13.961 "product_name": "Malloc disk", 00:04:13.961 "block_size": 512, 00:04:13.961 "num_blocks": 16384, 00:04:13.961 "uuid": "937beac6-7466-43b9-a68d-6f13386b83f8", 00:04:13.961 "assigned_rate_limits": { 00:04:13.961 "rw_ios_per_sec": 0, 00:04:13.961 "rw_mbytes_per_sec": 0, 00:04:13.961 "r_mbytes_per_sec": 0, 00:04:13.961 "w_mbytes_per_sec": 0 00:04:13.961 }, 00:04:13.961 "claimed": true, 00:04:13.961 "claim_type": "exclusive_write", 00:04:13.961 "zoned": false, 00:04:13.961 "supported_io_types": { 00:04:13.961 "read": true, 00:04:13.961 "write": true, 00:04:13.961 "unmap": true, 00:04:13.961 "flush": true, 00:04:13.961 "reset": true, 00:04:13.961 "nvme_admin": false, 00:04:13.961 "nvme_io": false, 00:04:13.961 "nvme_io_md": false, 00:04:13.961 "write_zeroes": true, 00:04:13.961 "zcopy": true, 00:04:13.961 "get_zone_info": false, 00:04:13.961 "zone_management": false, 00:04:13.961 "zone_append": false, 00:04:13.961 "compare": false, 00:04:13.961 "compare_and_write": false, 00:04:13.961 "abort": true, 00:04:13.961 "seek_hole": false, 00:04:13.961 "seek_data": false, 00:04:13.961 "copy": true, 00:04:13.961 "nvme_iov_md": false 00:04:13.961 }, 00:04:13.961 "memory_domains": [ 00:04:13.961 { 00:04:13.961 "dma_device_id": "system", 00:04:13.961 "dma_device_type": 1 00:04:13.961 }, 00:04:13.961 { 00:04:13.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.961 "dma_device_type": 2 00:04:13.961 } 00:04:13.961 ], 00:04:13.961 "driver_specific": {} 00:04:13.961 }, 00:04:13.961 { 00:04:13.961 "name": "Passthru0", 00:04:13.961 "aliases": [ 00:04:13.961 "4cbd3f4d-bbd7-5ae9-b094-d4aa3c810e71" 00:04:13.961 ], 00:04:13.961 "product_name": "passthru", 00:04:13.961 "block_size": 512, 00:04:13.961 "num_blocks": 16384, 00:04:13.961 "uuid": "4cbd3f4d-bbd7-5ae9-b094-d4aa3c810e71", 00:04:13.961 "assigned_rate_limits": { 00:04:13.961 "rw_ios_per_sec": 0, 00:04:13.961 "rw_mbytes_per_sec": 0, 00:04:13.961 "r_mbytes_per_sec": 0, 00:04:13.961 "w_mbytes_per_sec": 0 00:04:13.961 }, 00:04:13.961 "claimed": false, 00:04:13.961 "zoned": false, 00:04:13.961 "supported_io_types": { 00:04:13.961 "read": true, 00:04:13.961 "write": true, 00:04:13.961 "unmap": true, 00:04:13.961 "flush": true, 00:04:13.961 "reset": true, 
00:04:13.961 "nvme_admin": false, 00:04:13.961 "nvme_io": false, 00:04:13.961 "nvme_io_md": false, 00:04:13.961 "write_zeroes": true, 00:04:13.961 "zcopy": true, 00:04:13.961 "get_zone_info": false, 00:04:13.961 "zone_management": false, 00:04:13.961 "zone_append": false, 00:04:13.961 "compare": false, 00:04:13.961 "compare_and_write": false, 00:04:13.961 "abort": true, 00:04:13.961 "seek_hole": false, 00:04:13.961 "seek_data": false, 00:04:13.961 "copy": true, 00:04:13.961 "nvme_iov_md": false 00:04:13.961 }, 00:04:13.961 "memory_domains": [ 00:04:13.961 { 00:04:13.961 "dma_device_id": "system", 00:04:13.961 "dma_device_type": 1 00:04:13.961 }, 00:04:13.961 { 00:04:13.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.961 "dma_device_type": 2 00:04:13.961 } 00:04:13.961 ], 00:04:13.961 "driver_specific": { 00:04:13.961 "passthru": { 00:04:13.961 "name": "Passthru0", 00:04:13.961 "base_bdev_name": "Malloc2" 00:04:13.961 } 00:04:13.961 } 00:04:13.961 } 00:04:13.961 ]' 00:04:13.961 07:02:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:13.961 07:02:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:13.961 07:02:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:13.961 07:02:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.961 07:02:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.961 07:02:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.961 07:02:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:13.961 07:02:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.961 07:02:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.961 07:02:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.961 07:02:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:13.961 07:02:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.961 07:02:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.961 07:02:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.961 07:02:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:13.961 07:02:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:13.961 07:02:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:13.961 00:04:13.961 real 0m0.305s 00:04:13.961 user 0m0.191s 00:04:13.961 sys 0m0.046s 00:04:13.961 07:02:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:13.961 07:02:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.961 ************************************ 00:04:13.961 END TEST rpc_daemon_integrity 00:04:13.961 ************************************ 00:04:13.961 07:02:36 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:13.961 07:02:36 rpc -- rpc/rpc.sh@84 -- # killprocess 3276307 00:04:13.961 07:02:36 rpc -- common/autotest_common.sh@952 -- # '[' -z 3276307 ']' 00:04:13.961 07:02:36 rpc -- common/autotest_common.sh@956 -- # kill -0 3276307 00:04:13.961 07:02:36 rpc -- common/autotest_common.sh@957 -- # uname 00:04:13.961 07:02:36 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:13.961 07:02:36 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3276307 
00:04:14.221 07:02:36 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:14.221 07:02:36 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:14.221 07:02:36 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3276307' 00:04:14.221 killing process with pid 3276307 00:04:14.221 07:02:36 rpc -- common/autotest_common.sh@971 -- # kill 3276307 00:04:14.221 07:02:36 rpc -- common/autotest_common.sh@976 -- # wait 3276307 00:04:14.481 00:04:14.481 real 0m2.698s 00:04:14.481 user 0m3.410s 00:04:14.481 sys 0m0.851s 00:04:14.481 07:02:36 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:14.481 07:02:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.481 ************************************ 00:04:14.481 END TEST rpc 00:04:14.481 ************************************ 00:04:14.481 07:02:36 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:14.481 07:02:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:14.481 07:02:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:14.481 07:02:36 -- common/autotest_common.sh@10 -- # set +x 00:04:14.481 ************************************ 00:04:14.481 START TEST skip_rpc 00:04:14.481 ************************************ 00:04:14.481 07:02:36 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:14.481 * Looking for test storage... 00:04:14.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:14.481 07:02:36 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:14.481 07:02:36 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:14.481 07:02:36 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:14.741 07:02:36 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:14.741 07:02:36 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.741 07:02:36 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.741 07:02:36 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.741 07:02:36 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.741 07:02:36 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.741 07:02:36 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.741 07:02:36 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.741 07:02:36 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.741 07:02:36 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.741 07:02:36 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.741 07:02:36 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.741 07:02:36 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:14.741 07:02:36 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:14.741 07:02:36 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.742 07:02:36 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.742 07:02:36 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:14.742 07:02:36 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:14.742 07:02:36 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.742 07:02:36 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:14.742 07:02:36 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.742 07:02:36 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:14.742 07:02:36 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:14.742 07:02:36 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.742 07:02:36 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:14.742 07:02:36 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.742 07:02:36 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.742 07:02:36 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.742 07:02:36 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:14.742 07:02:36 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.742 07:02:36 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:14.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.742 --rc genhtml_branch_coverage=1 00:04:14.742 --rc genhtml_function_coverage=1 00:04:14.742 --rc genhtml_legend=1 00:04:14.742 --rc geninfo_all_blocks=1 00:04:14.742 --rc geninfo_unexecuted_blocks=1 00:04:14.742 00:04:14.742 ' 00:04:14.742 07:02:36 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:14.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.742 --rc genhtml_branch_coverage=1 00:04:14.742 --rc genhtml_function_coverage=1 00:04:14.742 --rc genhtml_legend=1 00:04:14.742 --rc geninfo_all_blocks=1 00:04:14.742 --rc geninfo_unexecuted_blocks=1 00:04:14.742 00:04:14.742 ' 00:04:14.742 07:02:36 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:14.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.742 --rc genhtml_branch_coverage=1 00:04:14.742 --rc genhtml_function_coverage=1 00:04:14.742 --rc genhtml_legend=1 00:04:14.742 --rc geninfo_all_blocks=1 00:04:14.742 --rc geninfo_unexecuted_blocks=1 00:04:14.742 00:04:14.742 ' 00:04:14.742 07:02:36 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:14.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.742 --rc genhtml_branch_coverage=1 00:04:14.742 --rc genhtml_function_coverage=1 00:04:14.742 --rc genhtml_legend=1 00:04:14.742 --rc geninfo_all_blocks=1 00:04:14.742 --rc geninfo_unexecuted_blocks=1 00:04:14.742 00:04:14.742 ' 00:04:14.742 07:02:36 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:14.742 07:02:36 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:14.742 07:02:36 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:14.742 07:02:36 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:14.742 07:02:36 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:14.742 07:02:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.742 ************************************ 00:04:14.742 START TEST skip_rpc 00:04:14.742 ************************************ 00:04:14.742 07:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:04:14.742 
07:02:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3277154 00:04:14.742 07:02:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.742 07:02:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:14.742 07:02:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:14.742 [2024-11-20 07:02:36.886864] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:04:14.742 [2024-11-20 07:02:36.886929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277154 ] 00:04:14.742 [2024-11-20 07:02:36.979409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.002 [2024-11-20 07:02:37.033980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3277154 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 3277154 ']' 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 3277154 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3277154 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3277154' 00:04:20.293 killing process with pid 3277154 00:04:20.293 07:02:41 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 3277154 00:04:20.293 07:02:41 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 3277154 00:04:20.293 00:04:20.293 real 0m5.264s 00:04:20.293 user 0m5.022s 00:04:20.293 sys 0m0.290s 00:04:20.293 07:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:20.293 07:02:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.293 ************************************ 00:04:20.293 END TEST skip_rpc 00:04:20.293 ************************************ 00:04:20.293 07:02:42 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:20.293 07:02:42 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:20.293 07:02:42 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:20.293 07:02:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.293 ************************************ 00:04:20.293 START TEST skip_rpc_with_json 00:04:20.293 ************************************ 00:04:20.293 07:02:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:20.293 07:02:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:20.293 07:02:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3278193 00:04:20.293 07:02:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.293 07:02:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3278193 00:04:20.293 07:02:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.293 07:02:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 3278193 ']' 00:04:20.293 07:02:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.293 07:02:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:20.293 07:02:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.293 07:02:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:20.293 07:02:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.293 [2024-11-20 07:02:42.211331] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
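Editor's note: the skip_rpc pass that just finished launches spdk_tgt with --no-rpc-server, so /var/tmp/spdk.sock is never created, and the NOT wrapper asserts that an RPC call fails before the reactor is killed and waited on. A minimal sketch of that pattern, with illustrative SPDK_BIN and RPC_PY paths (not the harness's own variables):

    # Sketch only; mirrors the skip_rpc flow above. Paths are assumptions.
    SPDK_BIN=./build/bin/spdk_tgt
    RPC_PY=./scripts/rpc.py

    "$SPDK_BIN" --no-rpc-server -m 0x1 &    # RPC server disabled, reactor on core 0
    spdk_pid=$!
    sleep 5                                  # the test sleeps; there is no socket to waitforlisten on

    if "$RPC_PY" spdk_get_version; then      # must fail: /var/tmp/spdk.sock was never created
        echo "unexpected: RPC answered with --no-rpc-server" >&2
        kill -9 "$spdk_pid"
        exit 1
    fi

    kill "$spdk_pid" && wait "$spdk_pid"     # mirrors killprocess + wait in the log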
00:04:20.293 [2024-11-20 07:02:42.211379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278193 ] 00:04:20.293 [2024-11-20 07:02:42.293505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.293 [2024-11-20 07:02:42.324743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.864 07:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:20.864 07:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:20.864 07:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:20.864 07:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.864 07:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.864 [2024-11-20 07:02:43.015180] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:20.864 request: 00:04:20.864 { 00:04:20.864 "trtype": "tcp", 00:04:20.864 "method": "nvmf_get_transports", 00:04:20.864 "req_id": 1 00:04:20.864 } 00:04:20.864 Got JSON-RPC error response 00:04:20.864 response: 00:04:20.864 { 00:04:20.864 "code": -19, 00:04:20.864 "message": "No such device" 00:04:20.864 } 00:04:20.864 07:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:20.864 07:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:20.864 07:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.864 07:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.864 [2024-11-20 07:02:43.027264] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:20.864 07:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.864 07:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:20.864 07:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.864 07:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.125 07:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:21.125 07:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:21.125 { 00:04:21.125 "subsystems": [ 00:04:21.125 { 00:04:21.125 "subsystem": "fsdev", 00:04:21.125 "config": [ 00:04:21.125 { 00:04:21.125 "method": "fsdev_set_opts", 00:04:21.125 "params": { 00:04:21.125 "fsdev_io_pool_size": 65535, 00:04:21.125 "fsdev_io_cache_size": 256 00:04:21.125 } 00:04:21.125 } 00:04:21.125 ] 00:04:21.125 }, 00:04:21.125 { 00:04:21.125 "subsystem": "vfio_user_target", 00:04:21.125 "config": null 00:04:21.125 }, 00:04:21.125 { 00:04:21.125 "subsystem": "keyring", 00:04:21.125 "config": [] 00:04:21.125 }, 00:04:21.125 { 00:04:21.125 "subsystem": "iobuf", 00:04:21.125 "config": [ 00:04:21.125 { 00:04:21.125 "method": "iobuf_set_options", 00:04:21.125 "params": { 00:04:21.125 "small_pool_count": 8192, 00:04:21.125 "large_pool_count": 1024, 00:04:21.125 "small_bufsize": 8192, 00:04:21.125 "large_bufsize": 135168, 00:04:21.125 "enable_numa": false 00:04:21.125 } 00:04:21.125 } 
00:04:21.125 ] 00:04:21.125 }, 00:04:21.125 { 00:04:21.125 "subsystem": "sock", 00:04:21.125 "config": [ 00:04:21.125 { 00:04:21.125 "method": "sock_set_default_impl", 00:04:21.125 "params": { 00:04:21.125 "impl_name": "posix" 00:04:21.125 } 00:04:21.125 }, 00:04:21.125 { 00:04:21.125 "method": "sock_impl_set_options", 00:04:21.125 "params": { 00:04:21.125 "impl_name": "ssl", 00:04:21.125 "recv_buf_size": 4096, 00:04:21.125 "send_buf_size": 4096, 00:04:21.125 "enable_recv_pipe": true, 00:04:21.125 "enable_quickack": false, 00:04:21.125 "enable_placement_id": 0, 00:04:21.125 "enable_zerocopy_send_server": true, 00:04:21.125 "enable_zerocopy_send_client": false, 00:04:21.125 "zerocopy_threshold": 0, 00:04:21.125 "tls_version": 0, 00:04:21.125 "enable_ktls": false 00:04:21.125 } 00:04:21.125 }, 00:04:21.125 { 00:04:21.125 "method": "sock_impl_set_options", 00:04:21.125 "params": { 00:04:21.125 "impl_name": "posix", 00:04:21.125 "recv_buf_size": 2097152, 00:04:21.125 "send_buf_size": 2097152, 00:04:21.125 "enable_recv_pipe": true, 00:04:21.125 "enable_quickack": false, 00:04:21.125 "enable_placement_id": 0, 00:04:21.125 "enable_zerocopy_send_server": true, 00:04:21.125 "enable_zerocopy_send_client": false, 00:04:21.125 "zerocopy_threshold": 0, 00:04:21.125 "tls_version": 0, 00:04:21.125 "enable_ktls": false 00:04:21.125 } 00:04:21.125 } 00:04:21.125 ] 00:04:21.125 }, 00:04:21.125 { 00:04:21.125 "subsystem": "vmd", 00:04:21.125 "config": [] 00:04:21.125 }, 00:04:21.125 { 00:04:21.125 "subsystem": "accel", 00:04:21.125 "config": [ 00:04:21.125 { 00:04:21.125 "method": "accel_set_options", 00:04:21.125 "params": { 00:04:21.125 "small_cache_size": 128, 00:04:21.125 "large_cache_size": 16, 00:04:21.125 "task_count": 2048, 00:04:21.125 "sequence_count": 2048, 00:04:21.125 "buf_count": 2048 00:04:21.125 } 00:04:21.125 } 00:04:21.125 ] 00:04:21.125 }, 00:04:21.125 { 00:04:21.125 "subsystem": "bdev", 00:04:21.125 "config": [ 00:04:21.125 { 00:04:21.125 "method": "bdev_set_options", 00:04:21.125 "params": { 00:04:21.125 "bdev_io_pool_size": 65535, 00:04:21.125 "bdev_io_cache_size": 256, 00:04:21.125 "bdev_auto_examine": true, 00:04:21.125 "iobuf_small_cache_size": 128, 00:04:21.125 "iobuf_large_cache_size": 16 00:04:21.126 } 00:04:21.126 }, 00:04:21.126 { 00:04:21.126 "method": "bdev_raid_set_options", 00:04:21.126 "params": { 00:04:21.126 "process_window_size_kb": 1024, 00:04:21.126 "process_max_bandwidth_mb_sec": 0 00:04:21.126 } 00:04:21.126 }, 00:04:21.126 { 00:04:21.126 "method": "bdev_iscsi_set_options", 00:04:21.126 "params": { 00:04:21.126 "timeout_sec": 30 00:04:21.126 } 00:04:21.126 }, 00:04:21.126 { 00:04:21.126 "method": "bdev_nvme_set_options", 00:04:21.126 "params": { 00:04:21.126 "action_on_timeout": "none", 00:04:21.126 "timeout_us": 0, 00:04:21.126 "timeout_admin_us": 0, 00:04:21.126 "keep_alive_timeout_ms": 10000, 00:04:21.126 "arbitration_burst": 0, 00:04:21.126 "low_priority_weight": 0, 00:04:21.126 "medium_priority_weight": 0, 00:04:21.126 "high_priority_weight": 0, 00:04:21.126 "nvme_adminq_poll_period_us": 10000, 00:04:21.126 "nvme_ioq_poll_period_us": 0, 00:04:21.126 "io_queue_requests": 0, 00:04:21.126 "delay_cmd_submit": true, 00:04:21.126 "transport_retry_count": 4, 00:04:21.126 "bdev_retry_count": 3, 00:04:21.126 "transport_ack_timeout": 0, 00:04:21.126 "ctrlr_loss_timeout_sec": 0, 00:04:21.126 "reconnect_delay_sec": 0, 00:04:21.126 "fast_io_fail_timeout_sec": 0, 00:04:21.126 "disable_auto_failback": false, 00:04:21.126 "generate_uuids": false, 00:04:21.126 "transport_tos": 
0, 00:04:21.126 "nvme_error_stat": false, 00:04:21.126 "rdma_srq_size": 0, 00:04:21.126 "io_path_stat": false, 00:04:21.126 "allow_accel_sequence": false, 00:04:21.126 "rdma_max_cq_size": 0, 00:04:21.126 "rdma_cm_event_timeout_ms": 0, 00:04:21.126 "dhchap_digests": [ 00:04:21.126 "sha256", 00:04:21.126 "sha384", 00:04:21.126 "sha512" 00:04:21.126 ], 00:04:21.126 "dhchap_dhgroups": [ 00:04:21.126 "null", 00:04:21.126 "ffdhe2048", 00:04:21.126 "ffdhe3072", 00:04:21.126 "ffdhe4096", 00:04:21.126 "ffdhe6144", 00:04:21.126 "ffdhe8192" 00:04:21.126 ] 00:04:21.126 } 00:04:21.126 }, 00:04:21.126 { 00:04:21.126 "method": "bdev_nvme_set_hotplug", 00:04:21.126 "params": { 00:04:21.126 "period_us": 100000, 00:04:21.126 "enable": false 00:04:21.126 } 00:04:21.126 }, 00:04:21.126 { 00:04:21.126 "method": "bdev_wait_for_examine" 00:04:21.126 } 00:04:21.126 ] 00:04:21.126 }, 00:04:21.126 { 00:04:21.126 "subsystem": "scsi", 00:04:21.126 "config": null 00:04:21.126 }, 00:04:21.126 { 00:04:21.126 "subsystem": "scheduler", 00:04:21.126 "config": [ 00:04:21.126 { 00:04:21.126 "method": "framework_set_scheduler", 00:04:21.126 "params": { 00:04:21.126 "name": "static" 00:04:21.126 } 00:04:21.126 } 00:04:21.126 ] 00:04:21.126 }, 00:04:21.126 { 00:04:21.126 "subsystem": "vhost_scsi", 00:04:21.126 "config": [] 00:04:21.126 }, 00:04:21.126 { 00:04:21.126 "subsystem": "vhost_blk", 00:04:21.126 "config": [] 00:04:21.126 }, 00:04:21.126 { 00:04:21.126 "subsystem": "ublk", 00:04:21.126 "config": [] 00:04:21.126 }, 00:04:21.126 { 00:04:21.126 "subsystem": "nbd", 00:04:21.126 "config": [] 00:04:21.126 }, 00:04:21.126 { 00:04:21.126 "subsystem": "nvmf", 00:04:21.126 "config": [ 00:04:21.126 { 00:04:21.126 "method": "nvmf_set_config", 00:04:21.126 "params": { 00:04:21.126 "discovery_filter": "match_any", 00:04:21.126 "admin_cmd_passthru": { 00:04:21.126 "identify_ctrlr": false 00:04:21.126 }, 00:04:21.126 "dhchap_digests": [ 00:04:21.126 "sha256", 00:04:21.126 "sha384", 00:04:21.126 "sha512" 00:04:21.126 ], 00:04:21.126 "dhchap_dhgroups": [ 00:04:21.126 "null", 00:04:21.126 "ffdhe2048", 00:04:21.126 "ffdhe3072", 00:04:21.126 "ffdhe4096", 00:04:21.126 "ffdhe6144", 00:04:21.126 "ffdhe8192" 00:04:21.126 ] 00:04:21.126 } 00:04:21.126 }, 00:04:21.126 { 00:04:21.126 "method": "nvmf_set_max_subsystems", 00:04:21.126 "params": { 00:04:21.126 "max_subsystems": 1024 00:04:21.126 } 00:04:21.126 }, 00:04:21.126 { 00:04:21.126 "method": "nvmf_set_crdt", 00:04:21.126 "params": { 00:04:21.126 "crdt1": 0, 00:04:21.126 "crdt2": 0, 00:04:21.126 "crdt3": 0 00:04:21.126 } 00:04:21.126 }, 00:04:21.126 { 00:04:21.126 "method": "nvmf_create_transport", 00:04:21.126 "params": { 00:04:21.126 "trtype": "TCP", 00:04:21.126 "max_queue_depth": 128, 00:04:21.126 "max_io_qpairs_per_ctrlr": 127, 00:04:21.126 "in_capsule_data_size": 4096, 00:04:21.126 "max_io_size": 131072, 00:04:21.126 "io_unit_size": 131072, 00:04:21.126 "max_aq_depth": 128, 00:04:21.126 "num_shared_buffers": 511, 00:04:21.126 "buf_cache_size": 4294967295, 00:04:21.126 "dif_insert_or_strip": false, 00:04:21.126 "zcopy": false, 00:04:21.126 "c2h_success": true, 00:04:21.126 "sock_priority": 0, 00:04:21.126 "abort_timeout_sec": 1, 00:04:21.126 "ack_timeout": 0, 00:04:21.126 "data_wr_pool_size": 0 00:04:21.126 } 00:04:21.126 } 00:04:21.126 ] 00:04:21.126 }, 00:04:21.126 { 00:04:21.126 "subsystem": "iscsi", 00:04:21.126 "config": [ 00:04:21.126 { 00:04:21.126 "method": "iscsi_set_options", 00:04:21.126 "params": { 00:04:21.126 "node_base": "iqn.2016-06.io.spdk", 00:04:21.126 "max_sessions": 
128, 00:04:21.126 "max_connections_per_session": 2, 00:04:21.126 "max_queue_depth": 64, 00:04:21.126 "default_time2wait": 2, 00:04:21.126 "default_time2retain": 20, 00:04:21.126 "first_burst_length": 8192, 00:04:21.126 "immediate_data": true, 00:04:21.126 "allow_duplicated_isid": false, 00:04:21.126 "error_recovery_level": 0, 00:04:21.126 "nop_timeout": 60, 00:04:21.126 "nop_in_interval": 30, 00:04:21.126 "disable_chap": false, 00:04:21.126 "require_chap": false, 00:04:21.126 "mutual_chap": false, 00:04:21.126 "chap_group": 0, 00:04:21.126 "max_large_datain_per_connection": 64, 00:04:21.126 "max_r2t_per_connection": 4, 00:04:21.126 "pdu_pool_size": 36864, 00:04:21.126 "immediate_data_pool_size": 16384, 00:04:21.126 "data_out_pool_size": 2048 00:04:21.126 } 00:04:21.126 } 00:04:21.126 ] 00:04:21.126 } 00:04:21.126 ] 00:04:21.126 } 00:04:21.126 07:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:21.126 07:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3278193 00:04:21.126 07:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3278193 ']' 00:04:21.126 07:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3278193 00:04:21.126 07:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:21.126 07:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:21.126 07:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3278193 00:04:21.126 07:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:21.126 07:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:21.126 07:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3278193' 00:04:21.126 killing process with pid 3278193 00:04:21.126 07:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3278193 00:04:21.126 07:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3278193 00:04:21.387 07:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3278533 00:04:21.387 07:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:21.387 07:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3278533 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3278533 ']' 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3278533 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3278533 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- 
# echo 'killing process with pid 3278533' 00:04:26.674 killing process with pid 3278533 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3278533 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3278533 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:26.674 00:04:26.674 real 0m6.566s 00:04:26.674 user 0m6.501s 00:04:26.674 sys 0m0.546s 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.674 ************************************ 00:04:26.674 END TEST skip_rpc_with_json 00:04:26.674 ************************************ 00:04:26.674 07:02:48 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:26.674 07:02:48 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:26.674 07:02:48 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:26.674 07:02:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.674 ************************************ 00:04:26.674 START TEST skip_rpc_with_delay 00:04:26.674 ************************************ 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.674 
[2024-11-20 07:02:48.871791] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:26.674 00:04:26.674 real 0m0.088s 00:04:26.674 user 0m0.058s 00:04:26.674 sys 0m0.029s 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:26.674 07:02:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:26.674 ************************************ 00:04:26.674 END TEST skip_rpc_with_delay 00:04:26.674 ************************************ 00:04:26.675 07:02:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:26.675 07:02:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:26.675 07:02:48 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:26.675 07:02:48 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:26.675 07:02:48 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:26.675 07:02:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.935 ************************************ 00:04:26.935 START TEST exit_on_failed_rpc_init 00:04:26.935 ************************************ 00:04:26.935 07:02:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:26.935 07:02:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3279606 00:04:26.935 07:02:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3279606 00:04:26.935 07:02:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:26.935 07:02:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 3279606 ']' 00:04:26.936 07:02:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.936 07:02:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:26.936 07:02:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.936 07:02:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:26.936 07:02:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:26.936 [2024-11-20 07:02:49.031390] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
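Editor's note: two results land above. skip_rpc_with_json captured the live configuration with save_config, relaunched from that file with --json, and grepped the new log for 'TCP Transport Init' to prove the transport was rebuilt; skip_rpc_with_delay then confirmed that --wait-for-rpc is rejected outright when --no-rpc-server disables the RPC server. A sketch of the save/reload round trip, with assumed file paths:

    # Sketch of the skip_rpc_with_json round trip; CFG/LOG and binary paths are assumptions.
    SPDK_BIN=./build/bin/spdk_tgt; RPC_PY=./scripts/rpc.py
    CFG=/tmp/config.json; LOG=/tmp/log.txt

    "$SPDK_BIN" -m 0x1 & tgt=$!               # phase 1: RPC server enabled
    sleep 3                                    # crude stand-in for the harness's waitforlisten helper
    "$RPC_PY" nvmf_create_transport -t tcp     # give save_config a transport to capture
    "$RPC_PY" save_config > "$CFG"             # dump the running configuration as JSON
    kill "$tgt"; wait "$tgt"

    # phase 2: no RPC server; everything is loaded back from the saved JSON
    "$SPDK_BIN" --no-rpc-server -m 0x1 --json "$CFG" > "$LOG" 2>&1 & tgt=$!
    sleep 5; kill "$tgt"; wait "$tgt"
    grep -q 'TCP Transport Init' "$LOG"        # the transport must have been recreated from the file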
00:04:26.936 [2024-11-20 07:02:49.031452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279606 ] 00:04:26.936 [2024-11-20 07:02:49.119443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.936 [2024-11-20 07:02:49.154142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.877 07:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:27.877 07:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:27.877 07:02:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.877 07:02:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.877 07:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:27.877 07:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.877 07:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.877 07:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:27.877 07:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.877 07:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:27.877 07:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.877 07:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:27.877 07:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.877 07:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:27.877 07:02:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.877 [2024-11-20 07:02:49.897137] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:04:27.877 [2024-11-20 07:02:49.897195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279854 ] 00:04:27.877 [2024-11-20 07:02:49.982017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.877 [2024-11-20 07:02:50.018628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.877 [2024-11-20 07:02:50.018677] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
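Editor's note: exit_on_failed_rpc_init deliberately provokes the 'RPC Unix domain socket path /var/tmp/spdk.sock in use' error above: the first target claims the default socket, and a second instance on another core mask must fail rpc_listen and exit non-zero. A sketch of the collision, under the same default-socket assumption:

    # Sketch of the socket collision above; both instances use the default /var/tmp/spdk.sock.
    SPDK_BIN=./build/bin/spdk_tgt
    "$SPDK_BIN" -m 0x1 & first=$!
    sleep 3                                  # let the first instance claim the RPC socket

    if "$SPDK_BIN" -m 0x2; then              # expected to fail: rpc_listen finds the socket busy
        echo "unexpected: second target initialized on a busy socket" >&2
        exit 1
    fi

    kill "$first"; wait "$first"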
00:04:27.877 [2024-11-20 07:02:50.018687] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:27.877 [2024-11-20 07:02:50.018693] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:27.877 07:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:27.877 07:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:27.877 07:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:27.877 07:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:27.877 07:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:27.877 07:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:27.877 07:02:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:27.877 07:02:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3279606 00:04:27.877 07:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 3279606 ']' 00:04:27.877 07:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 3279606 00:04:27.877 07:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:27.877 07:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:27.877 07:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3279606 00:04:27.877 07:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:27.877 07:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:27.877 07:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3279606' 00:04:27.877 killing process with pid 3279606 00:04:27.877 07:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 3279606 00:04:27.877 07:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 3279606 00:04:28.139 00:04:28.139 real 0m1.335s 00:04:28.139 user 0m1.556s 00:04:28.139 sys 0m0.403s 00:04:28.139 07:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:28.139 07:02:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:28.139 ************************************ 00:04:28.139 END TEST exit_on_failed_rpc_init 00:04:28.139 ************************************ 00:04:28.139 07:02:50 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:28.139 00:04:28.139 real 0m13.774s 00:04:28.139 user 0m13.374s 00:04:28.139 sys 0m1.584s 00:04:28.139 07:02:50 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:28.139 07:02:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.139 ************************************ 00:04:28.139 END TEST skip_rpc 00:04:28.139 ************************************ 00:04:28.139 07:02:50 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:28.139 07:02:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:28.139 07:02:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:28.139 07:02:50 -- 
common/autotest_common.sh@10 -- # set +x 00:04:28.400 ************************************ 00:04:28.400 START TEST rpc_client 00:04:28.400 ************************************ 00:04:28.400 07:02:50 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:28.400 * Looking for test storage... 00:04:28.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:28.400 07:02:50 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:28.400 07:02:50 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:28.400 07:02:50 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:28.400 07:02:50 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:28.400 07:02:50 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.401 07:02:50 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.401 07:02:50 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.401 07:02:50 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:28.401 07:02:50 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.401 07:02:50 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:28.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.401 --rc genhtml_branch_coverage=1 00:04:28.401 --rc genhtml_function_coverage=1 00:04:28.401 --rc genhtml_legend=1 00:04:28.401 --rc geninfo_all_blocks=1 00:04:28.401 --rc geninfo_unexecuted_blocks=1 00:04:28.401 00:04:28.401 ' 00:04:28.401 07:02:50 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:28.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.401 --rc genhtml_branch_coverage=1 00:04:28.401 --rc genhtml_function_coverage=1 00:04:28.401 --rc genhtml_legend=1 00:04:28.401 --rc geninfo_all_blocks=1 00:04:28.401 --rc geninfo_unexecuted_blocks=1 00:04:28.401 00:04:28.401 ' 00:04:28.401 07:02:50 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:28.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.401 --rc genhtml_branch_coverage=1 00:04:28.401 --rc genhtml_function_coverage=1 00:04:28.401 --rc genhtml_legend=1 00:04:28.401 --rc geninfo_all_blocks=1 00:04:28.401 --rc geninfo_unexecuted_blocks=1 00:04:28.401 00:04:28.401 ' 00:04:28.401 07:02:50 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:28.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.401 --rc genhtml_branch_coverage=1 00:04:28.401 --rc genhtml_function_coverage=1 00:04:28.401 --rc genhtml_legend=1 00:04:28.401 --rc geninfo_all_blocks=1 00:04:28.401 --rc geninfo_unexecuted_blocks=1 00:04:28.401 00:04:28.401 ' 00:04:28.401 07:02:50 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:28.401 OK 00:04:28.401 07:02:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:28.401 00:04:28.401 real 0m0.223s 00:04:28.401 user 0m0.138s 00:04:28.401 sys 0m0.099s 00:04:28.401 07:02:50 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:28.401 07:02:50 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:28.401 ************************************ 00:04:28.401 END TEST rpc_client 00:04:28.401 ************************************ 00:04:28.663 07:02:50 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
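Editor's note: most of the rpc_client trace above is not the test itself (the C binary simply prints OK) but scripts/common.sh comparing lcov versions with lt 1.15 2 to choose coverage flags. A condensed sketch of that dotted-version compare, assuming a plain component-wise numeric ordering is all the helper needs (the real cmp_versions also splits on '-' and ':'):

    # Simplified stand-in for the cmp_versions trace above; dot-separated numeric fields only.
    lt() {
        local -a v1 v2
        IFS=. read -ra v1 <<< "$1"
        IFS=. read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                                          # equal versions are not less-than
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2"              # the branch the log takes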
00:04:28.663 07:02:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:28.663 07:02:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:28.663 07:02:50 -- common/autotest_common.sh@10 -- # set +x 00:04:28.663 ************************************ 00:04:28.663 START TEST json_config 00:04:28.663 ************************************ 00:04:28.663 07:02:50 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:28.663 07:02:50 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:28.663 07:02:50 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:28.663 07:02:50 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:28.663 07:02:50 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:28.663 07:02:50 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.663 07:02:50 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.663 07:02:50 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.663 07:02:50 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.663 07:02:50 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.663 07:02:50 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.663 07:02:50 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.663 07:02:50 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.663 07:02:50 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.663 07:02:50 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.663 07:02:50 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.663 07:02:50 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:28.663 07:02:50 json_config -- scripts/common.sh@345 -- # : 1 00:04:28.663 07:02:50 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.663 07:02:50 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.663 07:02:50 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:28.663 07:02:50 json_config -- scripts/common.sh@353 -- # local d=1 00:04:28.663 07:02:50 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.663 07:02:50 json_config -- scripts/common.sh@355 -- # echo 1 00:04:28.663 07:02:50 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.663 07:02:50 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:28.663 07:02:50 json_config -- scripts/common.sh@353 -- # local d=2 00:04:28.663 07:02:50 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.663 07:02:50 json_config -- scripts/common.sh@355 -- # echo 2 00:04:28.663 07:02:50 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.663 07:02:50 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.663 07:02:50 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.663 07:02:50 json_config -- scripts/common.sh@368 -- # return 0 00:04:28.663 07:02:50 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.663 07:02:50 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:28.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.663 --rc genhtml_branch_coverage=1 00:04:28.663 --rc genhtml_function_coverage=1 00:04:28.663 --rc genhtml_legend=1 00:04:28.663 --rc geninfo_all_blocks=1 00:04:28.663 --rc geninfo_unexecuted_blocks=1 00:04:28.663 00:04:28.663 ' 00:04:28.663 07:02:50 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:28.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.663 --rc genhtml_branch_coverage=1 00:04:28.663 --rc genhtml_function_coverage=1 00:04:28.663 --rc genhtml_legend=1 00:04:28.663 --rc geninfo_all_blocks=1 00:04:28.663 --rc geninfo_unexecuted_blocks=1 00:04:28.663 00:04:28.663 ' 00:04:28.663 07:02:50 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:28.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.663 --rc genhtml_branch_coverage=1 00:04:28.663 --rc genhtml_function_coverage=1 00:04:28.663 --rc genhtml_legend=1 00:04:28.663 --rc geninfo_all_blocks=1 00:04:28.663 --rc geninfo_unexecuted_blocks=1 00:04:28.663 00:04:28.663 ' 00:04:28.663 07:02:50 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:28.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.663 --rc genhtml_branch_coverage=1 00:04:28.663 --rc genhtml_function_coverage=1 00:04:28.663 --rc genhtml_legend=1 00:04:28.664 --rc geninfo_all_blocks=1 00:04:28.664 --rc geninfo_unexecuted_blocks=1 00:04:28.664 00:04:28.664 ' 00:04:28.664 07:02:50 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:28.664 07:02:50 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:28.664 07:02:50 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:28.664 07:02:50 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:28.664 07:02:50 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:28.664 07:02:50 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:28.664 07:02:50 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.664 07:02:50 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.664 07:02:50 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.664 07:02:50 json_config -- paths/export.sh@5 -- # export PATH 00:04:28.664 07:02:50 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@51 -- # : 0 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
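Editor's note: sourcing nvmf/common.sh above also pins a host identity: nvme gen-hostnqn emits an nqn.2014-08.org.nvmexpress:uuid:... name, and the bare UUID becomes NVME_HOSTID. A sketch of that derivation, assuming nvme-cli is installed and the hostid is simply the NQN's uuid suffix:

    # Sketch of the host-identity setup traced above; requires nvme-cli.
    NVME_HOSTNQN=$(nvme gen-hostnqn)             # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}              # strip through the last ':' to keep the bare uuid
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # later expanded as "${NVME_HOST[@]}" when the harness runs 'nvme connect'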
00:04:28.664 07:02:50 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:28.664 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:28.664 07:02:50 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:28.664 07:02:50 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:28.664 07:02:50 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:28.664 07:02:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:28.664 07:02:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:28.664 07:02:50 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:28.664 07:02:50 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:28.664 07:02:50 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:28.664 07:02:50 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:28.664 07:02:50 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:28.664 07:02:50 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:28.664 07:02:50 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:28.664 07:02:50 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:28.664 07:02:50 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:28.664 07:02:50 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:28.664 07:02:50 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:28.664 07:02:50 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:28.664 INFO: JSON configuration test init 00:04:28.664 07:02:50 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:28.664 07:02:50 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:28.664 07:02:50 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:28.664 07:02:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.664 07:02:50 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:28.664 07:02:50 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:28.664 07:02:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.925 07:02:50 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:28.925 07:02:50 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:28.925 07:02:50 json_config -- json_config/common.sh@10 -- # shift 00:04:28.925 07:02:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:28.925 07:02:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:28.925 07:02:50 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:28.925 07:02:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.925 07:02:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.925 07:02:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3280070 00:04:28.925 07:02:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:28.925 Waiting for target to run... 00:04:28.925 07:02:50 json_config -- json_config/common.sh@25 -- # waitforlisten 3280070 /var/tmp/spdk_tgt.sock 00:04:28.925 07:02:50 json_config -- common/autotest_common.sh@833 -- # '[' -z 3280070 ']' 00:04:28.925 07:02:50 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:28.925 07:02:50 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:28.925 07:02:50 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:28.925 07:02:50 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:28.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:28.925 07:02:50 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:28.925 07:02:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.925 [2024-11-20 07:02:50.998359] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
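Editor's note: one real wart shows up while nvmf/common.sh is sourced above: its line 33 evaluates '[' '' -eq 1 ']' and bash prints `[: : integer expression expected`, because -eq requires integer operands and the variable expands empty. The run continues regardless, but a defaulted expansion would keep the test well-formed; a small sketch (variable name hypothetical):

    # FLAG is hypothetical; it reproduces the empty-operand failure logged above.
    FLAG=""
    [ "$FLAG" -eq 1 ]           # errors: [: : integer expression expected
    [ "${FLAG:-0}" -eq 1 ]      # defaulting to 0 always hands -eq an integer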
00:04:28.925 [2024-11-20 07:02:50.998432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3280070 ] 00:04:29.186 [2024-11-20 07:02:51.332253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.186 [2024-11-20 07:02:51.363217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.756 07:02:51 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:29.756 07:02:51 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:29.756 07:02:51 json_config -- json_config/common.sh@26 -- # echo '' 00:04:29.756 00:04:29.756 07:02:51 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:29.756 07:02:51 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:29.756 07:02:51 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:29.756 07:02:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.756 07:02:51 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:29.756 07:02:51 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:29.756 07:02:51 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:29.757 07:02:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.757 07:02:51 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:29.757 07:02:51 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:29.757 07:02:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:30.327 07:02:52 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:30.327 07:02:52 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:30.327 07:02:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:30.327 07:02:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.327 07:02:52 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:30.327 07:02:52 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:30.327 07:02:52 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:30.327 07:02:52 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:30.327 07:02:52 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:30.327 07:02:52 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:30.327 07:02:52 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:30.327 07:02:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:30.327 07:02:52 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:30.327 07:02:52 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:30.327 07:02:52 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:30.327 07:02:52 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:30.327 07:02:52 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:30.327 07:02:52 json_config -- json_config/json_config.sh@54 -- # sort 00:04:30.327 07:02:52 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:30.327 07:02:52 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:30.327 07:02:52 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:30.327 07:02:52 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:30.327 07:02:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:30.327 07:02:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.587 07:02:52 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:30.587 07:02:52 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:30.587 07:02:52 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:30.587 07:02:52 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:30.587 07:02:52 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:30.587 07:02:52 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:30.587 07:02:52 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:30.587 07:02:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:30.587 07:02:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.587 07:02:52 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:30.587 07:02:52 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:30.587 07:02:52 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:30.588 07:02:52 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:30.588 07:02:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:30.588 MallocForNvmf0 00:04:30.588 07:02:52 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:30.588 07:02:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:30.847 MallocForNvmf1 00:04:30.847 07:02:52 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:30.847 07:02:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:31.106 [2024-11-20 07:02:53.153540] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:31.106 07:02:53 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:31.107 07:02:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:31.107 07:02:53 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:31.107 07:02:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:31.366 07:02:53 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:31.366 07:02:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:31.625 07:02:53 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:31.625 07:02:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:31.625 [2024-11-20 07:02:53.879763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:31.885 07:02:53 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:31.885 07:02:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:31.885 07:02:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.885 07:02:53 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:31.885 07:02:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:31.885 07:02:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.885 07:02:53 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:31.885 07:02:53 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.885 07:02:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.885 MallocBdevForConfigChangeCheck 00:04:32.144 07:02:54 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:32.144 07:02:54 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:32.144 07:02:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.144 07:02:54 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:32.144 07:02:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:32.404 07:02:54 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:32.404 INFO: shutting down applications... 
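[editor's note] The RPC calls traced above are the entire NVMe-oF target bring-up for this test. A standalone sketch of the same sequence, assuming the working directory is the spdk tree and the target was started with -r /var/tmp/spdk_tgt.sock (all method names and arguments are verbatim from the trace):

    rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MiB malloc bdev, 512 B blocks
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MiB malloc bdev, 1024 B blocks
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0         # TCP transport; -u io-unit size, -c in-capsule size
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420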
00:04:32.404 07:02:54 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:32.404 07:02:54 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:32.404 07:02:54 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:32.404 07:02:54 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:32.975 Calling clear_iscsi_subsystem 00:04:32.975 Calling clear_nvmf_subsystem 00:04:32.975 Calling clear_nbd_subsystem 00:04:32.975 Calling clear_ublk_subsystem 00:04:32.975 Calling clear_vhost_blk_subsystem 00:04:32.975 Calling clear_vhost_scsi_subsystem 00:04:32.975 Calling clear_bdev_subsystem 00:04:32.975 07:02:54 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:32.975 07:02:54 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:32.975 07:02:54 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:32.975 07:02:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:32.975 07:02:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:32.975 07:02:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:33.235 07:02:55 json_config -- json_config/json_config.sh@352 -- # break 00:04:33.235 07:02:55 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:33.235 07:02:55 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:33.235 07:02:55 json_config -- json_config/common.sh@31 -- # local app=target 00:04:33.235 07:02:55 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:33.235 07:02:55 json_config -- json_config/common.sh@35 -- # [[ -n 3280070 ]] 00:04:33.235 07:02:55 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3280070 00:04:33.235 07:02:55 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:33.235 07:02:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.235 07:02:55 json_config -- json_config/common.sh@41 -- # kill -0 3280070 00:04:33.235 07:02:55 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:33.807 07:02:55 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:33.807 07:02:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.807 07:02:55 json_config -- json_config/common.sh@41 -- # kill -0 3280070 00:04:33.807 07:02:55 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:33.807 07:02:55 json_config -- json_config/common.sh@43 -- # break 00:04:33.807 07:02:55 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:33.807 07:02:55 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:33.807 SPDK target shutdown done 00:04:33.807 07:02:55 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:33.807 INFO: relaunching applications... 
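[editor's note] The shutdown traced above is a SIGINT followed by a liveness poll; a distilled sketch of the loop json_config/common.sh runs (app_pid is the spdk_tgt PID; stderr suppression added here for brevity):

    kill -SIGINT "$app_pid"                       # ask the target to exit cleanly
    for (( i = 0; i < 30; i++ )); do              # up to ~15 s at 0.5 s per probe
        kill -0 "$app_pid" 2>/dev/null || break   # signal 0 only tests that the PID exists
        sleep 0.5
    done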
00:04:33.807 07:02:55 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:33.807 07:02:55 json_config -- json_config/common.sh@9 -- # local app=target 00:04:33.807 07:02:55 json_config -- json_config/common.sh@10 -- # shift 00:04:33.807 07:02:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:33.807 07:02:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:33.807 07:02:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:33.807 07:02:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.807 07:02:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.807 07:02:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3281210 00:04:33.807 07:02:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:33.807 Waiting for target to run... 00:04:33.807 07:02:55 json_config -- json_config/common.sh@25 -- # waitforlisten 3281210 /var/tmp/spdk_tgt.sock 00:04:33.807 07:02:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:33.807 07:02:55 json_config -- common/autotest_common.sh@833 -- # '[' -z 3281210 ']' 00:04:33.807 07:02:55 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:33.807 07:02:55 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:33.807 07:02:55 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:33.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:33.807 07:02:55 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:33.807 07:02:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.807 [2024-11-20 07:02:55.877111] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:04:33.807 [2024-11-20 07:02:55.877174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281210 ] 00:04:34.068 [2024-11-20 07:02:56.309973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.068 [2024-11-20 07:02:56.342983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.638 [2024-11-20 07:02:56.844077] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:34.638 [2024-11-20 07:02:56.876463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:34.899 07:02:56 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:34.899 07:02:56 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:34.899 07:02:56 json_config -- json_config/common.sh@26 -- # echo '' 00:04:34.899 00:04:34.899 07:02:56 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:34.899 07:02:56 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:34.899 INFO: Checking if target configuration is the same... 
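[editor's note] Relaunching from the saved state is the same binary pointed at the JSON dump; verbatim from the trace above, with paths shortened relative to the repo root:

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json &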
00:04:34.899 07:02:56 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:34.899 07:02:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:34.899 07:02:56 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.899 + '[' 2 -ne 2 ']' 00:04:34.900 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:34.900 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:34.900 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:34.900 +++ basename /dev/fd/62 00:04:34.900 ++ mktemp /tmp/62.XXX 00:04:34.900 + tmp_file_1=/tmp/62.98k 00:04:34.900 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.900 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:34.900 + tmp_file_2=/tmp/spdk_tgt_config.json.n3x 00:04:34.900 + ret=0 00:04:34.900 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:35.160 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:35.160 + diff -u /tmp/62.98k /tmp/spdk_tgt_config.json.n3x 00:04:35.160 + echo 'INFO: JSON config files are the same' 00:04:35.160 INFO: JSON config files are the same 00:04:35.160 + rm /tmp/62.98k /tmp/spdk_tgt_config.json.n3x 00:04:35.160 + exit 0 00:04:35.160 07:02:57 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:35.160 07:02:57 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:35.160 INFO: changing configuration and checking if this can be detected... 00:04:35.160 07:02:57 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:35.160 07:02:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:35.420 07:02:57 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.420 07:02:57 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:35.420 07:02:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.420 + '[' 2 -ne 2 ']' 00:04:35.420 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:35.420 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
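[editor's note] json_diff.sh, as traced above, boils down to normalizing both configs with config_filter.py -method sort and diffing the results. A minimal equivalent, assuming config_filter.py reads the config on stdin as the trace implies:

    filter=test/json_config/config_filter.py
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.json
    $filter -method sort < spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'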
00:04:35.420 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:35.420 +++ basename /dev/fd/62 00:04:35.420 ++ mktemp /tmp/62.XXX 00:04:35.420 + tmp_file_1=/tmp/62.K6d 00:04:35.420 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.420 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:35.420 + tmp_file_2=/tmp/spdk_tgt_config.json.zj9 00:04:35.420 + ret=0 00:04:35.420 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:35.681 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:35.681 + diff -u /tmp/62.K6d /tmp/spdk_tgt_config.json.zj9 00:04:35.681 + ret=1 00:04:35.681 + echo '=== Start of file: /tmp/62.K6d ===' 00:04:35.681 + cat /tmp/62.K6d 00:04:35.681 + echo '=== End of file: /tmp/62.K6d ===' 00:04:35.681 + echo '' 00:04:35.681 + echo '=== Start of file: /tmp/spdk_tgt_config.json.zj9 ===' 00:04:35.681 + cat /tmp/spdk_tgt_config.json.zj9 00:04:35.681 + echo '=== End of file: /tmp/spdk_tgt_config.json.zj9 ===' 00:04:35.681 + echo '' 00:04:35.681 + rm /tmp/62.K6d /tmp/spdk_tgt_config.json.zj9 00:04:35.681 + exit 1 00:04:35.681 07:02:57 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:35.681 INFO: configuration change detected. 00:04:35.681 07:02:57 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:35.681 07:02:57 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:35.681 07:02:57 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:35.681 07:02:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.681 07:02:57 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:35.681 07:02:57 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:35.681 07:02:57 json_config -- json_config/json_config.sh@324 -- # [[ -n 3281210 ]] 00:04:35.681 07:02:57 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:35.681 07:02:57 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:35.681 07:02:57 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:35.681 07:02:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.681 07:02:57 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:35.681 07:02:57 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:35.681 07:02:57 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:35.681 07:02:57 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:35.681 07:02:57 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:35.681 07:02:57 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:35.681 07:02:57 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:35.681 07:02:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.681 07:02:57 json_config -- json_config/json_config.sh@330 -- # killprocess 3281210 00:04:35.681 07:02:57 json_config -- common/autotest_common.sh@952 -- # '[' -z 3281210 ']' 00:04:35.681 07:02:57 json_config -- common/autotest_common.sh@956 -- # kill -0 3281210 00:04:35.681 07:02:57 json_config -- common/autotest_common.sh@957 -- # uname 00:04:35.681 07:02:57 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:35.681 07:02:57 
json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3281210 00:04:35.942 07:02:58 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:35.942 07:02:58 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:35.942 07:02:58 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3281210' 00:04:35.942 killing process with pid 3281210 00:04:35.942 07:02:58 json_config -- common/autotest_common.sh@971 -- # kill 3281210 00:04:35.942 07:02:58 json_config -- common/autotest_common.sh@976 -- # wait 3281210 00:04:36.204 07:02:58 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.204 07:02:58 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:36.204 07:02:58 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:36.204 07:02:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.204 07:02:58 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:36.204 07:02:58 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:36.204 INFO: Success 00:04:36.204 00:04:36.204 real 0m7.589s 00:04:36.204 user 0m9.010s 00:04:36.204 sys 0m2.152s 00:04:36.204 07:02:58 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:36.204 07:02:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.204 ************************************ 00:04:36.204 END TEST json_config 00:04:36.204 ************************************ 00:04:36.204 07:02:58 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:36.204 07:02:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:36.204 07:02:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:36.204 07:02:58 -- common/autotest_common.sh@10 -- # set +x 00:04:36.204 ************************************ 00:04:36.204 START TEST json_config_extra_key 00:04:36.204 ************************************ 00:04:36.204 07:02:58 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:36.204 07:02:58 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:36.204 07:02:58 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:36.204 07:02:58 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:36.466 07:02:58 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.466 07:02:58 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:36.466 07:02:58 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.466 07:02:58 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:36.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.466 --rc genhtml_branch_coverage=1 00:04:36.466 --rc genhtml_function_coverage=1 00:04:36.466 --rc genhtml_legend=1 00:04:36.466 --rc geninfo_all_blocks=1 00:04:36.466 --rc geninfo_unexecuted_blocks=1 00:04:36.466 00:04:36.466 ' 00:04:36.466 07:02:58 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:36.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.466 --rc genhtml_branch_coverage=1 00:04:36.466 --rc genhtml_function_coverage=1 00:04:36.466 --rc genhtml_legend=1 00:04:36.466 --rc geninfo_all_blocks=1 00:04:36.466 --rc geninfo_unexecuted_blocks=1 00:04:36.466 00:04:36.466 ' 00:04:36.466 07:02:58 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:36.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.466 --rc genhtml_branch_coverage=1 00:04:36.466 --rc genhtml_function_coverage=1 00:04:36.466 --rc genhtml_legend=1 00:04:36.466 --rc geninfo_all_blocks=1 00:04:36.466 --rc geninfo_unexecuted_blocks=1 00:04:36.466 00:04:36.466 ' 00:04:36.466 07:02:58 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:36.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.466 --rc genhtml_branch_coverage=1 00:04:36.466 --rc genhtml_function_coverage=1 00:04:36.466 --rc genhtml_legend=1 00:04:36.466 --rc geninfo_all_blocks=1 00:04:36.466 --rc geninfo_unexecuted_blocks=1 00:04:36.466 00:04:36.466 ' 00:04:36.466 07:02:58 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:36.466 07:02:58 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:36.466 07:02:58 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:36.466 07:02:58 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:36.466 07:02:58 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:36.466 07:02:58 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:36.466 07:02:58 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:36.466 07:02:58 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:36.466 07:02:58 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:36.466 07:02:58 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:36.466 07:02:58 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:36.466 07:02:58 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:36.466 07:02:58 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:36.466 07:02:58 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:36.466 07:02:58 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:36.466 07:02:58 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:36.466 07:02:58 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:36.466 07:02:58 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:36.466 07:02:58 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:36.466 07:02:58 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:36.467 07:02:58 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:36.467 07:02:58 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:36.467 07:02:58 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.467 07:02:58 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.467 07:02:58 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.467 07:02:58 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:36.467 07:02:58 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.467 07:02:58 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:36.467 07:02:58 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:36.467 07:02:58 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:36.467 07:02:58 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:36.467 07:02:58 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:36.467 07:02:58 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:36.467 07:02:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:36.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:36.467 07:02:58 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:36.467 07:02:58 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:36.467 07:02:58 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:36.467 07:02:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:36.467 07:02:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:36.467 07:02:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:36.467 07:02:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:36.467 07:02:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:36.467 07:02:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:36.467 07:02:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:36.467 07:02:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:36.467 07:02:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:36.467 07:02:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:36.467 07:02:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:36.467 INFO: launching applications... 
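[editor's note] The "[: : integer expression expected" message above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': the variable is empty, and -eq needs an integer on both sides. A hypothetical defensive form (flag is a placeholder, not the script's actual variable):

    [ "${flag:-0}" -eq 1 ] && echo enabled   # default to 0 so the test always sees an integer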
00:04:36.467 07:02:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:36.467 07:02:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:36.467 07:02:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:36.467 07:02:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:36.467 07:02:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:36.467 07:02:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:36.467 07:02:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.467 07:02:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.467 07:02:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3282000 00:04:36.467 07:02:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:36.467 Waiting for target to run... 00:04:36.467 07:02:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3282000 /var/tmp/spdk_tgt.sock 00:04:36.467 07:02:58 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 3282000 ']' 00:04:36.467 07:02:58 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:36.467 07:02:58 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:36.467 07:02:58 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:36.467 07:02:58 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:36.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:36.467 07:02:58 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:36.467 07:02:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:36.467 [2024-11-20 07:02:58.660358] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:04:36.467 [2024-11-20 07:02:58.660435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282000 ] 00:04:36.728 [2024-11-20 07:02:58.928905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.728 [2024-11-20 07:02:58.952377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.299 07:02:59 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:37.299 07:02:59 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:37.299 07:02:59 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:37.299 00:04:37.299 07:02:59 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:37.299 INFO: shutting down applications... 
00:04:37.300 07:02:59 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:37.300 07:02:59 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:37.300 07:02:59 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:37.300 07:02:59 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3282000 ]] 00:04:37.300 07:02:59 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3282000 00:04:37.300 07:02:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:37.300 07:02:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.300 07:02:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3282000 00:04:37.300 07:02:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:37.872 07:02:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:37.872 07:02:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.872 07:02:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3282000 00:04:37.872 07:02:59 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:37.872 07:02:59 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:37.872 07:02:59 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:37.872 07:02:59 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:37.872 SPDK target shutdown done 00:04:37.872 07:02:59 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:37.872 Success 00:04:37.872 00:04:37.872 real 0m1.566s 00:04:37.872 user 0m1.200s 00:04:37.872 sys 0m0.390s 00:04:37.872 07:02:59 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:37.872 07:02:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:37.872 ************************************ 00:04:37.872 END TEST json_config_extra_key 00:04:37.872 ************************************ 00:04:37.872 07:02:59 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:37.872 07:02:59 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:37.872 07:02:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:37.872 07:02:59 -- common/autotest_common.sh@10 -- # set +x 00:04:37.872 ************************************ 00:04:37.872 START TEST alias_rpc 00:04:37.872 ************************************ 00:04:37.872 07:03:00 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:37.872 * Looking for test storage... 
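[editor's note] The storage probe traced in the json_config_extra_key section, and repeated below for alias_rpc, gates the LCOV options on "lt 1.15 2": cmp_versions in scripts/common.sh splits each version on '.', '-' and ':' and compares field by field. A condensed sketch of the idea (the real helper also validates that each field is numeric):

    lt() {   # true when $1 sorts before $2; numeric fields only
        local -a a b; local v
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        for (( v = 0; v < ${#a[@]} || v < ${#b[@]}; v++ )); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # missing fields compare as 0
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'pre-2.0 lcov'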
00:04:37.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:37.872 07:03:00 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:37.872 07:03:00 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:37.872 07:03:00 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:38.134 07:03:00 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.134 07:03:00 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:38.134 07:03:00 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.134 07:03:00 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:38.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.134 --rc genhtml_branch_coverage=1 00:04:38.134 --rc genhtml_function_coverage=1 00:04:38.134 --rc genhtml_legend=1 00:04:38.134 --rc geninfo_all_blocks=1 00:04:38.134 --rc geninfo_unexecuted_blocks=1 00:04:38.134 00:04:38.134 ' 00:04:38.134 07:03:00 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:38.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.134 --rc genhtml_branch_coverage=1 00:04:38.134 --rc genhtml_function_coverage=1 00:04:38.134 --rc genhtml_legend=1 00:04:38.134 --rc geninfo_all_blocks=1 00:04:38.134 --rc geninfo_unexecuted_blocks=1 00:04:38.134 00:04:38.134 ' 00:04:38.134 07:03:00 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:38.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.134 --rc genhtml_branch_coverage=1 00:04:38.134 --rc genhtml_function_coverage=1 00:04:38.134 --rc genhtml_legend=1 00:04:38.134 --rc geninfo_all_blocks=1 00:04:38.134 --rc geninfo_unexecuted_blocks=1 00:04:38.134 00:04:38.134 ' 00:04:38.134 07:03:00 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:38.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.134 --rc genhtml_branch_coverage=1 00:04:38.134 --rc genhtml_function_coverage=1 00:04:38.134 --rc genhtml_legend=1 00:04:38.134 --rc geninfo_all_blocks=1 00:04:38.134 --rc geninfo_unexecuted_blocks=1 00:04:38.134 00:04:38.134 ' 00:04:38.134 07:03:00 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:38.134 07:03:00 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3282408 00:04:38.134 07:03:00 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3282408 00:04:38.134 07:03:00 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.134 07:03:00 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 3282408 ']' 00:04:38.134 07:03:00 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.134 07:03:00 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:38.134 07:03:00 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.134 07:03:00 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:38.134 07:03:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.134 [2024-11-20 07:03:00.301724] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:04:38.134 [2024-11-20 07:03:00.301796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282408 ] 00:04:38.134 [2024-11-20 07:03:00.390081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.395 [2024-11-20 07:03:00.433156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.966 07:03:01 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:38.966 07:03:01 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:38.966 07:03:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:39.226 07:03:01 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3282408 00:04:39.226 07:03:01 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 3282408 ']' 00:04:39.226 07:03:01 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 3282408 00:04:39.226 07:03:01 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:39.226 07:03:01 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:39.226 07:03:01 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3282408 00:04:39.226 07:03:01 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:39.226 07:03:01 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:39.226 07:03:01 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3282408' 00:04:39.226 killing process with pid 3282408 00:04:39.226 07:03:01 alias_rpc -- common/autotest_common.sh@971 -- # kill 3282408 00:04:39.226 07:03:01 alias_rpc -- common/autotest_common.sh@976 -- # wait 3282408 00:04:39.488 00:04:39.488 real 0m1.524s 00:04:39.488 user 0m1.681s 00:04:39.488 sys 0m0.430s 00:04:39.488 07:03:01 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:39.488 07:03:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.488 ************************************ 00:04:39.488 END TEST alias_rpc 00:04:39.488 ************************************ 00:04:39.488 07:03:01 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:39.488 07:03:01 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:39.488 07:03:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:39.488 07:03:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:39.488 07:03:01 -- common/autotest_common.sh@10 -- # set +x 00:04:39.488 ************************************ 00:04:39.488 START TEST spdkcli_tcp 00:04:39.488 ************************************ 00:04:39.488 07:03:01 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:39.488 * Looking for test storage... 
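[editor's note] killprocess(), used in the alias_rpc teardown just above, is the common cleanup helper: confirm the PID is alive, check the command name (an SPDK reactor shows up as reactor_0), then kill and reap. A distilled sketch; the sudo-wrapped branch in autotest_common.sh is elided:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                            # already gone?
        local name; name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for spdk_tgt
        [ "$name" = sudo ] && return 1                        # sudo case elided in this sketch
        kill "$pid"
        wait "$pid"                                           # reap; pid is this shell's child
    }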
00:04:39.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:39.488 07:03:01 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:39.488 07:03:01 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:39.488 07:03:01 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:39.750 07:03:01 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.750 07:03:01 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:39.750 07:03:01 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.750 07:03:01 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:39.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.750 --rc genhtml_branch_coverage=1 00:04:39.750 --rc genhtml_function_coverage=1 00:04:39.750 --rc genhtml_legend=1 00:04:39.750 --rc geninfo_all_blocks=1 00:04:39.750 --rc geninfo_unexecuted_blocks=1 00:04:39.750 00:04:39.750 ' 00:04:39.750 07:03:01 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:39.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.750 --rc genhtml_branch_coverage=1 00:04:39.750 --rc genhtml_function_coverage=1 00:04:39.750 --rc genhtml_legend=1 00:04:39.750 --rc geninfo_all_blocks=1 00:04:39.750 --rc 
geninfo_unexecuted_blocks=1 00:04:39.750 00:04:39.750 ' 00:04:39.750 07:03:01 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:39.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.750 --rc genhtml_branch_coverage=1 00:04:39.750 --rc genhtml_function_coverage=1 00:04:39.750 --rc genhtml_legend=1 00:04:39.750 --rc geninfo_all_blocks=1 00:04:39.750 --rc geninfo_unexecuted_blocks=1 00:04:39.750 00:04:39.750 ' 00:04:39.750 07:03:01 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:39.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.750 --rc genhtml_branch_coverage=1 00:04:39.750 --rc genhtml_function_coverage=1 00:04:39.750 --rc genhtml_legend=1 00:04:39.750 --rc geninfo_all_blocks=1 00:04:39.750 --rc geninfo_unexecuted_blocks=1 00:04:39.750 00:04:39.750 ' 00:04:39.750 07:03:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:39.750 07:03:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:39.750 07:03:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:39.750 07:03:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:39.750 07:03:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:39.750 07:03:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:39.750 07:03:01 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:39.750 07:03:01 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:39.750 07:03:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.750 07:03:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3282894 00:04:39.750 07:03:01 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3282894 00:04:39.750 07:03:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:39.750 07:03:01 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 3282894 ']' 00:04:39.750 07:03:01 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.750 07:03:01 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:39.750 07:03:01 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.750 07:03:01 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:39.750 07:03:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.750 [2024-11-20 07:03:01.914699] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
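[editor's note] As the next trace lines show, the TCP leg of spdkcli_tcp is a socat bridge from port 9998 to the target's UNIX RPC socket, after which rpc.py talks plain TCP. The commands are verbatim from the trace; the trailing kill mirrors the script's err_cleanup:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &            # TCP 9998 -> RPC socket
    socat_pid=$!
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods    # retry up to 100x, 2 s timeout
    kill "$socat_pid"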
00:04:39.750 [2024-11-20 07:03:01.914770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282894 ] 00:04:39.750 [2024-11-20 07:03:02.003989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:40.011 [2024-11-20 07:03:02.046689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.011 [2024-11-20 07:03:02.046689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.583 07:03:02 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:40.583 07:03:02 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:40.583 07:03:02 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:40.583 07:03:02 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3282914 00:04:40.583 07:03:02 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:40.844 [ 00:04:40.844 "bdev_malloc_delete", 00:04:40.844 "bdev_malloc_create", 00:04:40.844 "bdev_null_resize", 00:04:40.844 "bdev_null_delete", 00:04:40.844 "bdev_null_create", 00:04:40.844 "bdev_nvme_cuse_unregister", 00:04:40.844 "bdev_nvme_cuse_register", 00:04:40.844 "bdev_opal_new_user", 00:04:40.844 "bdev_opal_set_lock_state", 00:04:40.844 "bdev_opal_delete", 00:04:40.844 "bdev_opal_get_info", 00:04:40.844 "bdev_opal_create", 00:04:40.844 "bdev_nvme_opal_revert", 00:04:40.844 "bdev_nvme_opal_init", 00:04:40.844 "bdev_nvme_send_cmd", 00:04:40.844 "bdev_nvme_set_keys", 00:04:40.844 "bdev_nvme_get_path_iostat", 00:04:40.844 "bdev_nvme_get_mdns_discovery_info", 00:04:40.844 "bdev_nvme_stop_mdns_discovery", 00:04:40.844 "bdev_nvme_start_mdns_discovery", 00:04:40.844 "bdev_nvme_set_multipath_policy", 00:04:40.844 "bdev_nvme_set_preferred_path", 00:04:40.844 "bdev_nvme_get_io_paths", 00:04:40.844 "bdev_nvme_remove_error_injection", 00:04:40.844 "bdev_nvme_add_error_injection", 00:04:40.844 "bdev_nvme_get_discovery_info", 00:04:40.844 "bdev_nvme_stop_discovery", 00:04:40.844 "bdev_nvme_start_discovery", 00:04:40.844 "bdev_nvme_get_controller_health_info", 00:04:40.844 "bdev_nvme_disable_controller", 00:04:40.844 "bdev_nvme_enable_controller", 00:04:40.844 "bdev_nvme_reset_controller", 00:04:40.844 "bdev_nvme_get_transport_statistics", 00:04:40.844 "bdev_nvme_apply_firmware", 00:04:40.844 "bdev_nvme_detach_controller", 00:04:40.844 "bdev_nvme_get_controllers", 00:04:40.844 "bdev_nvme_attach_controller", 00:04:40.844 "bdev_nvme_set_hotplug", 00:04:40.844 "bdev_nvme_set_options", 00:04:40.844 "bdev_passthru_delete", 00:04:40.844 "bdev_passthru_create", 00:04:40.844 "bdev_lvol_set_parent_bdev", 00:04:40.844 "bdev_lvol_set_parent", 00:04:40.844 "bdev_lvol_check_shallow_copy", 00:04:40.844 "bdev_lvol_start_shallow_copy", 00:04:40.844 "bdev_lvol_grow_lvstore", 00:04:40.844 "bdev_lvol_get_lvols", 00:04:40.844 "bdev_lvol_get_lvstores", 00:04:40.844 "bdev_lvol_delete", 00:04:40.844 "bdev_lvol_set_read_only", 00:04:40.844 "bdev_lvol_resize", 00:04:40.844 "bdev_lvol_decouple_parent", 00:04:40.845 "bdev_lvol_inflate", 00:04:40.845 "bdev_lvol_rename", 00:04:40.845 "bdev_lvol_clone_bdev", 00:04:40.845 "bdev_lvol_clone", 00:04:40.845 "bdev_lvol_snapshot", 00:04:40.845 "bdev_lvol_create", 00:04:40.845 "bdev_lvol_delete_lvstore", 00:04:40.845 "bdev_lvol_rename_lvstore", 
00:04:40.845 "bdev_lvol_create_lvstore", 00:04:40.845 "bdev_raid_set_options", 00:04:40.845 "bdev_raid_remove_base_bdev", 00:04:40.845 "bdev_raid_add_base_bdev", 00:04:40.845 "bdev_raid_delete", 00:04:40.845 "bdev_raid_create", 00:04:40.845 "bdev_raid_get_bdevs", 00:04:40.845 "bdev_error_inject_error", 00:04:40.845 "bdev_error_delete", 00:04:40.845 "bdev_error_create", 00:04:40.845 "bdev_split_delete", 00:04:40.845 "bdev_split_create", 00:04:40.845 "bdev_delay_delete", 00:04:40.845 "bdev_delay_create", 00:04:40.845 "bdev_delay_update_latency", 00:04:40.845 "bdev_zone_block_delete", 00:04:40.845 "bdev_zone_block_create", 00:04:40.845 "blobfs_create", 00:04:40.845 "blobfs_detect", 00:04:40.845 "blobfs_set_cache_size", 00:04:40.845 "bdev_aio_delete", 00:04:40.845 "bdev_aio_rescan", 00:04:40.845 "bdev_aio_create", 00:04:40.845 "bdev_ftl_set_property", 00:04:40.845 "bdev_ftl_get_properties", 00:04:40.845 "bdev_ftl_get_stats", 00:04:40.845 "bdev_ftl_unmap", 00:04:40.845 "bdev_ftl_unload", 00:04:40.845 "bdev_ftl_delete", 00:04:40.845 "bdev_ftl_load", 00:04:40.845 "bdev_ftl_create", 00:04:40.845 "bdev_virtio_attach_controller", 00:04:40.845 "bdev_virtio_scsi_get_devices", 00:04:40.845 "bdev_virtio_detach_controller", 00:04:40.845 "bdev_virtio_blk_set_hotplug", 00:04:40.845 "bdev_iscsi_delete", 00:04:40.845 "bdev_iscsi_create", 00:04:40.845 "bdev_iscsi_set_options", 00:04:40.845 "accel_error_inject_error", 00:04:40.845 "ioat_scan_accel_module", 00:04:40.845 "dsa_scan_accel_module", 00:04:40.845 "iaa_scan_accel_module", 00:04:40.845 "vfu_virtio_create_fs_endpoint", 00:04:40.845 "vfu_virtio_create_scsi_endpoint", 00:04:40.845 "vfu_virtio_scsi_remove_target", 00:04:40.845 "vfu_virtio_scsi_add_target", 00:04:40.845 "vfu_virtio_create_blk_endpoint", 00:04:40.845 "vfu_virtio_delete_endpoint", 00:04:40.845 "keyring_file_remove_key", 00:04:40.845 "keyring_file_add_key", 00:04:40.845 "keyring_linux_set_options", 00:04:40.845 "fsdev_aio_delete", 00:04:40.845 "fsdev_aio_create", 00:04:40.845 "iscsi_get_histogram", 00:04:40.845 "iscsi_enable_histogram", 00:04:40.845 "iscsi_set_options", 00:04:40.845 "iscsi_get_auth_groups", 00:04:40.845 "iscsi_auth_group_remove_secret", 00:04:40.845 "iscsi_auth_group_add_secret", 00:04:40.845 "iscsi_delete_auth_group", 00:04:40.845 "iscsi_create_auth_group", 00:04:40.845 "iscsi_set_discovery_auth", 00:04:40.845 "iscsi_get_options", 00:04:40.845 "iscsi_target_node_request_logout", 00:04:40.845 "iscsi_target_node_set_redirect", 00:04:40.845 "iscsi_target_node_set_auth", 00:04:40.845 "iscsi_target_node_add_lun", 00:04:40.845 "iscsi_get_stats", 00:04:40.845 "iscsi_get_connections", 00:04:40.845 "iscsi_portal_group_set_auth", 00:04:40.845 "iscsi_start_portal_group", 00:04:40.845 "iscsi_delete_portal_group", 00:04:40.845 "iscsi_create_portal_group", 00:04:40.845 "iscsi_get_portal_groups", 00:04:40.845 "iscsi_delete_target_node", 00:04:40.845 "iscsi_target_node_remove_pg_ig_maps", 00:04:40.845 "iscsi_target_node_add_pg_ig_maps", 00:04:40.845 "iscsi_create_target_node", 00:04:40.845 "iscsi_get_target_nodes", 00:04:40.845 "iscsi_delete_initiator_group", 00:04:40.845 "iscsi_initiator_group_remove_initiators", 00:04:40.845 "iscsi_initiator_group_add_initiators", 00:04:40.845 "iscsi_create_initiator_group", 00:04:40.845 "iscsi_get_initiator_groups", 00:04:40.845 "nvmf_set_crdt", 00:04:40.845 "nvmf_set_config", 00:04:40.845 "nvmf_set_max_subsystems", 00:04:40.845 "nvmf_stop_mdns_prr", 00:04:40.845 "nvmf_publish_mdns_prr", 00:04:40.845 "nvmf_subsystem_get_listeners", 00:04:40.845 
"nvmf_subsystem_get_qpairs", 00:04:40.845 "nvmf_subsystem_get_controllers", 00:04:40.845 "nvmf_get_stats", 00:04:40.845 "nvmf_get_transports", 00:04:40.845 "nvmf_create_transport", 00:04:40.845 "nvmf_get_targets", 00:04:40.845 "nvmf_delete_target", 00:04:40.845 "nvmf_create_target", 00:04:40.845 "nvmf_subsystem_allow_any_host", 00:04:40.845 "nvmf_subsystem_set_keys", 00:04:40.845 "nvmf_subsystem_remove_host", 00:04:40.845 "nvmf_subsystem_add_host", 00:04:40.845 "nvmf_ns_remove_host", 00:04:40.845 "nvmf_ns_add_host", 00:04:40.845 "nvmf_subsystem_remove_ns", 00:04:40.845 "nvmf_subsystem_set_ns_ana_group", 00:04:40.845 "nvmf_subsystem_add_ns", 00:04:40.845 "nvmf_subsystem_listener_set_ana_state", 00:04:40.845 "nvmf_discovery_get_referrals", 00:04:40.845 "nvmf_discovery_remove_referral", 00:04:40.845 "nvmf_discovery_add_referral", 00:04:40.845 "nvmf_subsystem_remove_listener", 00:04:40.845 "nvmf_subsystem_add_listener", 00:04:40.845 "nvmf_delete_subsystem", 00:04:40.845 "nvmf_create_subsystem", 00:04:40.845 "nvmf_get_subsystems", 00:04:40.845 "env_dpdk_get_mem_stats", 00:04:40.845 "nbd_get_disks", 00:04:40.845 "nbd_stop_disk", 00:04:40.845 "nbd_start_disk", 00:04:40.845 "ublk_recover_disk", 00:04:40.845 "ublk_get_disks", 00:04:40.845 "ublk_stop_disk", 00:04:40.845 "ublk_start_disk", 00:04:40.845 "ublk_destroy_target", 00:04:40.845 "ublk_create_target", 00:04:40.845 "virtio_blk_create_transport", 00:04:40.845 "virtio_blk_get_transports", 00:04:40.845 "vhost_controller_set_coalescing", 00:04:40.845 "vhost_get_controllers", 00:04:40.845 "vhost_delete_controller", 00:04:40.845 "vhost_create_blk_controller", 00:04:40.845 "vhost_scsi_controller_remove_target", 00:04:40.845 "vhost_scsi_controller_add_target", 00:04:40.845 "vhost_start_scsi_controller", 00:04:40.845 "vhost_create_scsi_controller", 00:04:40.845 "thread_set_cpumask", 00:04:40.845 "scheduler_set_options", 00:04:40.845 "framework_get_governor", 00:04:40.845 "framework_get_scheduler", 00:04:40.845 "framework_set_scheduler", 00:04:40.845 "framework_get_reactors", 00:04:40.845 "thread_get_io_channels", 00:04:40.845 "thread_get_pollers", 00:04:40.845 "thread_get_stats", 00:04:40.845 "framework_monitor_context_switch", 00:04:40.845 "spdk_kill_instance", 00:04:40.845 "log_enable_timestamps", 00:04:40.845 "log_get_flags", 00:04:40.845 "log_clear_flag", 00:04:40.845 "log_set_flag", 00:04:40.845 "log_get_level", 00:04:40.845 "log_set_level", 00:04:40.845 "log_get_print_level", 00:04:40.845 "log_set_print_level", 00:04:40.845 "framework_enable_cpumask_locks", 00:04:40.845 "framework_disable_cpumask_locks", 00:04:40.845 "framework_wait_init", 00:04:40.845 "framework_start_init", 00:04:40.845 "scsi_get_devices", 00:04:40.845 "bdev_get_histogram", 00:04:40.845 "bdev_enable_histogram", 00:04:40.845 "bdev_set_qos_limit", 00:04:40.845 "bdev_set_qd_sampling_period", 00:04:40.845 "bdev_get_bdevs", 00:04:40.845 "bdev_reset_iostat", 00:04:40.845 "bdev_get_iostat", 00:04:40.845 "bdev_examine", 00:04:40.845 "bdev_wait_for_examine", 00:04:40.845 "bdev_set_options", 00:04:40.845 "accel_get_stats", 00:04:40.845 "accel_set_options", 00:04:40.845 "accel_set_driver", 00:04:40.845 "accel_crypto_key_destroy", 00:04:40.845 "accel_crypto_keys_get", 00:04:40.845 "accel_crypto_key_create", 00:04:40.845 "accel_assign_opc", 00:04:40.845 "accel_get_module_info", 00:04:40.845 "accel_get_opc_assignments", 00:04:40.845 "vmd_rescan", 00:04:40.845 "vmd_remove_device", 00:04:40.845 "vmd_enable", 00:04:40.845 "sock_get_default_impl", 00:04:40.845 "sock_set_default_impl", 
00:04:40.845 "sock_impl_set_options", 00:04:40.845 "sock_impl_get_options", 00:04:40.845 "iobuf_get_stats", 00:04:40.845 "iobuf_set_options", 00:04:40.845 "keyring_get_keys", 00:04:40.845 "vfu_tgt_set_base_path", 00:04:40.845 "framework_get_pci_devices", 00:04:40.845 "framework_get_config", 00:04:40.845 "framework_get_subsystems", 00:04:40.845 "fsdev_set_opts", 00:04:40.845 "fsdev_get_opts", 00:04:40.845 "trace_get_info", 00:04:40.845 "trace_get_tpoint_group_mask", 00:04:40.845 "trace_disable_tpoint_group", 00:04:40.845 "trace_enable_tpoint_group", 00:04:40.845 "trace_clear_tpoint_mask", 00:04:40.845 "trace_set_tpoint_mask", 00:04:40.845 "notify_get_notifications", 00:04:40.845 "notify_get_types", 00:04:40.845 "spdk_get_version", 00:04:40.845 "rpc_get_methods" 00:04:40.845 ] 00:04:40.845 07:03:02 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:40.845 07:03:02 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:40.845 07:03:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.845 07:03:02 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:40.845 07:03:02 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3282894 00:04:40.845 07:03:02 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 3282894 ']' 00:04:40.845 07:03:02 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 3282894 00:04:40.845 07:03:02 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:40.845 07:03:02 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:40.845 07:03:02 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3282894 00:04:40.845 07:03:03 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:40.845 07:03:03 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:40.846 07:03:03 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3282894' 00:04:40.846 killing process with pid 3282894 00:04:40.846 07:03:03 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 3282894 00:04:40.846 07:03:03 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 3282894 00:04:41.107 00:04:41.107 real 0m1.555s 00:04:41.107 user 0m2.830s 00:04:41.107 sys 0m0.476s 00:04:41.107 07:03:03 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:41.107 07:03:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:41.107 ************************************ 00:04:41.107 END TEST spdkcli_tcp 00:04:41.107 ************************************ 00:04:41.107 07:03:03 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:41.107 07:03:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:41.107 07:03:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:41.107 07:03:03 -- common/autotest_common.sh@10 -- # set +x 00:04:41.107 ************************************ 00:04:41.107 START TEST dpdk_mem_utility 00:04:41.107 ************************************ 00:04:41.107 07:03:03 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:41.107 * Looking for test storage... 
00:04:41.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:41.107 07:03:03 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:41.107 07:03:03 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:41.107 07:03:03 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:41.369 07:03:03 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.369 07:03:03 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:41.369 07:03:03 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.369 07:03:03 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:41.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.369 --rc genhtml_branch_coverage=1 00:04:41.369 --rc genhtml_function_coverage=1 00:04:41.369 --rc genhtml_legend=1 00:04:41.369 --rc geninfo_all_blocks=1 00:04:41.369 --rc geninfo_unexecuted_blocks=1 00:04:41.369 00:04:41.369 ' 00:04:41.369 07:03:03 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:41.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.369 --rc 
genhtml_branch_coverage=1 00:04:41.369 --rc genhtml_function_coverage=1 00:04:41.369 --rc genhtml_legend=1 00:04:41.369 --rc geninfo_all_blocks=1 00:04:41.369 --rc geninfo_unexecuted_blocks=1 00:04:41.369 00:04:41.369 ' 00:04:41.369 07:03:03 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:41.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.369 --rc genhtml_branch_coverage=1 00:04:41.369 --rc genhtml_function_coverage=1 00:04:41.369 --rc genhtml_legend=1 00:04:41.369 --rc geninfo_all_blocks=1 00:04:41.369 --rc geninfo_unexecuted_blocks=1 00:04:41.369 00:04:41.369 ' 00:04:41.369 07:03:03 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:41.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.369 --rc genhtml_branch_coverage=1 00:04:41.369 --rc genhtml_function_coverage=1 00:04:41.369 --rc genhtml_legend=1 00:04:41.369 --rc geninfo_all_blocks=1 00:04:41.369 --rc geninfo_unexecuted_blocks=1 00:04:41.369 00:04:41.369 ' 00:04:41.369 07:03:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:41.369 07:03:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3283302 00:04:41.369 07:03:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3283302 00:04:41.369 07:03:03 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 3283302 ']' 00:04:41.369 07:03:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.369 07:03:03 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.369 07:03:03 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:41.369 07:03:03 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.370 07:03:03 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:41.370 07:03:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:41.370 [2024-11-20 07:03:03.533572] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:04:41.370 [2024-11-20 07:03:03.533641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283302 ] 00:04:41.370 [2024-11-20 07:03:03.621626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.630 [2024-11-20 07:03:03.662298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.201 07:03:04 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:42.201 07:03:04 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:42.201 07:03:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:42.201 07:03:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:42.201 07:03:04 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.201 07:03:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:42.201 { 00:04:42.201 "filename": "/tmp/spdk_mem_dump.txt" 00:04:42.201 } 00:04:42.201 07:03:04 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.201 07:03:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:42.201 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:42.201 1 heaps totaling size 810.000000 MiB 00:04:42.201 size: 810.000000 MiB heap id: 0 00:04:42.201 end heaps---------- 00:04:42.201 9 mempools totaling size 595.772034 MiB 00:04:42.201 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:42.201 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:42.201 size: 92.545471 MiB name: bdev_io_3283302 00:04:42.201 size: 50.003479 MiB name: msgpool_3283302 00:04:42.201 size: 36.509338 MiB name: fsdev_io_3283302 00:04:42.201 size: 21.763794 MiB name: PDU_Pool 00:04:42.201 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:42.201 size: 4.133484 MiB name: evtpool_3283302 00:04:42.201 size: 0.026123 MiB name: Session_Pool 00:04:42.201 end mempools------- 00:04:42.201 6 memzones totaling size 4.142822 MiB 00:04:42.201 size: 1.000366 MiB name: RG_ring_0_3283302 00:04:42.201 size: 1.000366 MiB name: RG_ring_1_3283302 00:04:42.201 size: 1.000366 MiB name: RG_ring_4_3283302 00:04:42.201 size: 1.000366 MiB name: RG_ring_5_3283302 00:04:42.201 size: 0.125366 MiB name: RG_ring_2_3283302 00:04:42.201 size: 0.015991 MiB name: RG_ring_3_3283302 00:04:42.201 end memzones------- 00:04:42.201 07:03:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:42.201 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:42.201 list of free elements. 
size: 10.862488 MiB 00:04:42.201 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:42.201 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:42.201 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:42.201 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:42.201 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:42.201 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:42.201 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:42.201 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:42.201 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:42.201 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:42.201 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:42.201 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:42.201 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:42.201 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:42.201 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:42.201 list of standard malloc elements. size: 199.218628 MiB 00:04:42.201 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:42.201 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:42.201 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:42.201 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:42.201 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:42.201 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:42.201 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:42.201 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:42.201 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:42.201 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:42.201 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:42.201 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:42.201 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:42.201 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:42.201 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:42.201 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:42.201 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:42.201 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:42.201 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:42.201 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:42.201 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:42.201 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:42.201 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:42.201 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:42.201 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:42.201 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:42.201 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:42.201 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:42.201 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:42.201 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:42.201 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:42.201 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:42.201 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:42.201 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:42.201 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:42.201 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:42.201 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:42.201 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:42.201 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:42.201 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:42.201 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:42.201 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:42.201 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:42.201 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:42.201 list of memzone associated elements. size: 599.918884 MiB 00:04:42.201 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:42.201 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:42.201 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:42.201 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:42.201 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:42.201 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3283302_0 00:04:42.201 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:42.201 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3283302_0 00:04:42.201 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:42.201 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3283302_0 00:04:42.201 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:42.201 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:42.201 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:42.201 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:42.201 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:42.201 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3283302_0 00:04:42.201 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:42.202 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3283302 00:04:42.202 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:42.202 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3283302 00:04:42.202 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:42.202 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:42.202 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:42.202 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:42.202 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:42.202 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:42.202 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:42.202 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:42.202 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:42.202 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3283302 00:04:42.202 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:42.202 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3283302 00:04:42.202 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:42.202 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3283302 00:04:42.202 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:04:42.202 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3283302 00:04:42.202 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:42.202 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3283302 00:04:42.202 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:42.202 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3283302 00:04:42.202 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:42.202 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:42.202 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:42.202 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:42.202 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:42.202 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:42.202 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:42.202 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3283302 00:04:42.202 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:42.202 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3283302 00:04:42.202 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:42.202 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:42.202 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:42.202 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:42.202 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:42.202 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3283302 00:04:42.202 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:42.202 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:42.202 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:42.202 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3283302 00:04:42.202 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:42.202 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3283302 00:04:42.202 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:42.202 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3283302 00:04:42.202 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:42.202 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:42.202 07:03:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:42.202 07:03:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3283302 00:04:42.202 07:03:04 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 3283302 ']' 00:04:42.202 07:03:04 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 3283302 00:04:42.202 07:03:04 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:42.202 07:03:04 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:42.202 07:03:04 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3283302 00:04:42.463 07:03:04 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:42.463 07:03:04 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:42.463 07:03:04 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3283302' 00:04:42.463 killing process with pid 3283302 00:04:42.463 07:03:04 
dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 3283302 00:04:42.463 07:03:04 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 3283302 00:04:42.463 00:04:42.463 real 0m1.395s 00:04:42.463 user 0m1.458s 00:04:42.463 sys 0m0.424s 00:04:42.463 07:03:04 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:42.463 07:03:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:42.463 ************************************ 00:04:42.463 END TEST dpdk_mem_utility 00:04:42.463 ************************************ 00:04:42.463 07:03:04 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:42.463 07:03:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:42.463 07:03:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:42.463 07:03:04 -- common/autotest_common.sh@10 -- # set +x 00:04:42.725 ************************************ 00:04:42.725 START TEST event 00:04:42.725 ************************************ 00:04:42.725 07:03:04 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:42.725 * Looking for test storage... 00:04:42.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:42.725 07:03:04 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:42.725 07:03:04 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:42.725 07:03:04 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:42.725 07:03:04 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:42.725 07:03:04 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.725 07:03:04 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.725 07:03:04 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.725 07:03:04 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.725 07:03:04 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.725 07:03:04 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.725 07:03:04 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.725 07:03:04 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.725 07:03:04 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.725 07:03:04 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.725 07:03:04 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.725 07:03:04 event -- scripts/common.sh@344 -- # case "$op" in 00:04:42.725 07:03:04 event -- scripts/common.sh@345 -- # : 1 00:04:42.725 07:03:04 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.725 07:03:04 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.725 07:03:04 event -- scripts/common.sh@365 -- # decimal 1 00:04:42.725 07:03:04 event -- scripts/common.sh@353 -- # local d=1 00:04:42.725 07:03:04 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.725 07:03:04 event -- scripts/common.sh@355 -- # echo 1 00:04:42.725 07:03:04 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.725 07:03:04 event -- scripts/common.sh@366 -- # decimal 2 00:04:42.726 07:03:04 event -- scripts/common.sh@353 -- # local d=2 00:04:42.726 07:03:04 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.726 07:03:04 event -- scripts/common.sh@355 -- # echo 2 00:04:42.726 07:03:04 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.726 07:03:04 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.726 07:03:04 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.726 07:03:04 event -- scripts/common.sh@368 -- # return 0 00:04:42.726 07:03:04 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.726 07:03:04 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:42.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.726 --rc genhtml_branch_coverage=1 00:04:42.726 --rc genhtml_function_coverage=1 00:04:42.726 --rc genhtml_legend=1 00:04:42.726 --rc geninfo_all_blocks=1 00:04:42.726 --rc geninfo_unexecuted_blocks=1 00:04:42.726 00:04:42.726 ' 00:04:42.726 07:03:04 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:42.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.726 --rc genhtml_branch_coverage=1 00:04:42.726 --rc genhtml_function_coverage=1 00:04:42.726 --rc genhtml_legend=1 00:04:42.726 --rc geninfo_all_blocks=1 00:04:42.726 --rc geninfo_unexecuted_blocks=1 00:04:42.726 00:04:42.726 ' 00:04:42.726 07:03:04 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:42.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.726 --rc genhtml_branch_coverage=1 00:04:42.726 --rc genhtml_function_coverage=1 00:04:42.726 --rc genhtml_legend=1 00:04:42.726 --rc geninfo_all_blocks=1 00:04:42.726 --rc geninfo_unexecuted_blocks=1 00:04:42.726 00:04:42.726 ' 00:04:42.726 07:03:04 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:42.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.726 --rc genhtml_branch_coverage=1 00:04:42.726 --rc genhtml_function_coverage=1 00:04:42.726 --rc genhtml_legend=1 00:04:42.726 --rc geninfo_all_blocks=1 00:04:42.726 --rc geninfo_unexecuted_blocks=1 00:04:42.726 00:04:42.726 ' 00:04:42.726 07:03:04 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:42.726 07:03:04 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:42.726 07:03:04 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:42.726 07:03:04 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:42.726 07:03:04 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:42.726 07:03:04 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.726 ************************************ 00:04:42.726 START TEST event_perf 00:04:42.726 ************************************ 00:04:42.726 07:03:04 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:42.987 Running I/O for 1 seconds...[2024-11-20 07:03:05.014044] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:04:42.987 [2024-11-20 07:03:05.014155] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283600 ] 00:04:42.987 [2024-11-20 07:03:05.104056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:42.987 [2024-11-20 07:03:05.148605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.987 [2024-11-20 07:03:05.148762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:42.987 [2024-11-20 07:03:05.148917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.987 Running I/O for 1 seconds...[2024-11-20 07:03:05.148918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:43.930 00:04:43.930 lcore 0: 177394 00:04:43.930 lcore 1: 177397 00:04:43.930 lcore 2: 177396 00:04:43.930 lcore 3: 177397 00:04:43.930 done. 00:04:43.930 00:04:43.930 real 0m1.184s 00:04:43.930 user 0m4.097s 00:04:43.930 sys 0m0.084s 00:04:43.930 07:03:06 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:43.930 07:03:06 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:43.930 ************************************ 00:04:43.930 END TEST event_perf 00:04:43.930 ************************************ 00:04:44.191 07:03:06 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:44.191 07:03:06 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:44.191 07:03:06 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:44.191 07:03:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.191 ************************************ 00:04:44.191 START TEST event_reactor 00:04:44.191 ************************************ 00:04:44.191 07:03:06 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:44.191 [2024-11-20 07:03:06.271990] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:04:44.191 [2024-11-20 07:03:06.272096] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283755 ] 00:04:44.191 [2024-11-20 07:03:06.357761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.191 [2024-11-20 07:03:06.396784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.265 test_start 00:04:45.265 oneshot 00:04:45.265 tick 100 00:04:45.265 tick 100 00:04:45.265 tick 250 00:04:45.265 tick 100 00:04:45.265 tick 100 00:04:45.265 tick 100 00:04:45.265 tick 250 00:04:45.265 tick 500 00:04:45.265 tick 100 00:04:45.265 tick 100 00:04:45.265 tick 250 00:04:45.265 tick 100 00:04:45.265 tick 100 00:04:45.265 test_end 00:04:45.265 00:04:45.265 real 0m1.171s 00:04:45.265 user 0m1.088s 00:04:45.265 sys 0m0.079s 00:04:45.265 07:03:07 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:45.265 07:03:07 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:45.265 ************************************ 00:04:45.265 END TEST event_reactor 00:04:45.265 ************************************ 00:04:45.265 07:03:07 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:45.265 07:03:07 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:45.265 07:03:07 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:45.265 07:03:07 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.542 ************************************ 00:04:45.542 START TEST event_reactor_perf 00:04:45.542 ************************************ 00:04:45.542 07:03:07 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:45.542 [2024-11-20 07:03:07.524469] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:04:45.542 [2024-11-20 07:03:07.524569] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284106 ] 00:04:45.542 [2024-11-20 07:03:07.613584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.542 [2024-11-20 07:03:07.651336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.517 test_start 00:04:46.517 test_end 00:04:46.518 Performance: 539856 events per second 00:04:46.518 00:04:46.518 real 0m1.176s 00:04:46.518 user 0m1.093s 00:04:46.518 sys 0m0.078s 00:04:46.518 07:03:08 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:46.518 07:03:08 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:46.518 ************************************ 00:04:46.518 END TEST event_reactor_perf 00:04:46.518 ************************************ 00:04:46.518 07:03:08 event -- event/event.sh@49 -- # uname -s 00:04:46.518 07:03:08 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:46.518 07:03:08 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:46.518 07:03:08 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:46.518 07:03:08 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:46.518 07:03:08 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.518 ************************************ 00:04:46.518 START TEST event_scheduler 00:04:46.518 ************************************ 00:04:46.518 07:03:08 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:46.778 * Looking for test storage... 
00:04:46.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:46.778 07:03:08 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:46.778 07:03:08 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:46.778 07:03:08 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:46.778 07:03:08 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.778 07:03:08 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:46.778 07:03:08 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.778 07:03:08 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:46.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.778 --rc genhtml_branch_coverage=1 00:04:46.778 --rc genhtml_function_coverage=1 00:04:46.778 --rc genhtml_legend=1 00:04:46.778 --rc geninfo_all_blocks=1 00:04:46.778 --rc geninfo_unexecuted_blocks=1 00:04:46.778 00:04:46.778 ' 00:04:46.778 07:03:08 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:46.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.778 --rc genhtml_branch_coverage=1 00:04:46.778 --rc genhtml_function_coverage=1 00:04:46.778 --rc genhtml_legend=1 00:04:46.778 --rc geninfo_all_blocks=1 00:04:46.778 --rc geninfo_unexecuted_blocks=1 00:04:46.778 00:04:46.778 ' 00:04:46.778 07:03:08 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:46.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.778 --rc genhtml_branch_coverage=1 00:04:46.778 --rc genhtml_function_coverage=1 00:04:46.778 --rc genhtml_legend=1 00:04:46.778 --rc geninfo_all_blocks=1 00:04:46.778 --rc geninfo_unexecuted_blocks=1 00:04:46.778 00:04:46.778 ' 00:04:46.778 07:03:08 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:46.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.778 --rc genhtml_branch_coverage=1 00:04:46.778 --rc genhtml_function_coverage=1 00:04:46.778 --rc genhtml_legend=1 00:04:46.778 --rc geninfo_all_blocks=1 00:04:46.778 --rc geninfo_unexecuted_blocks=1 00:04:46.778 00:04:46.778 ' 00:04:46.778 07:03:08 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:46.778 07:03:08 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3284646 00:04:46.778 07:03:08 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.778 07:03:08 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3284646 00:04:46.778 07:03:08 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:04:46.778 07:03:08 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 3284646 ']' 00:04:46.778 07:03:08 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.778 07:03:08 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:46.778 07:03:08 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.778 07:03:08 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:46.778 07:03:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:46.778 [2024-11-20 07:03:09.020136] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:04:46.778 [2024-11-20 07:03:09.020215] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284646 ] 00:04:47.037 [2024-11-20 07:03:09.113909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:47.037 [2024-11-20 07:03:09.169596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.037 [2024-11-20 07:03:09.169757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.037 [2024-11-20 07:03:09.169914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:47.037 [2024-11-20 07:03:09.169915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.605 07:03:09 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:47.605 07:03:09 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:04:47.605 07:03:09 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:47.605 07:03:09 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.605 07:03:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:47.605 [2024-11-20 07:03:09.840315] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:47.605 [2024-11-20 07:03:09.840334] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:47.605 [2024-11-20 07:03:09.840345] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:47.605 [2024-11-20 07:03:09.840351] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:47.605 [2024-11-20 07:03:09.840357] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:47.605 07:03:09 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.605 07:03:09 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:47.605 07:03:09 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.606 07:03:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:47.866 [2024-11-20 07:03:09.907154] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
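[editor note] For reference, the rpc_cmd framework_set_scheduler / framework_start_init steps traced just above are ordinary JSON-RPC calls on the app's Unix socket. The sketch below is a minimal illustration of that exchange, not SPDK's own scripts/rpc.py; it assumes the default /var/tmp/spdk.sock path used in this run and "name" as the parameter key for framework_set_scheduler. All three methods invoked here do appear verbatim in the rpc_get_methods listing earlier in this log.

#!/usr/bin/env python3
"""Minimal JSON-RPC client sketch mirroring the rpc_cmd calls in this log.

Illustrative only (assumed socket path and parameter key), not SPDK's
scripts/rpc.py.
"""
import json
import socket


def rpc_call(method, params=None, sock_path="/var/tmp/spdk.sock"):
    """Send one JSON-RPC request and read back the full JSON response."""
    request = {"jsonrpc": "2.0", "id": 1, "method": method}
    if params is not None:
        request["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full response")
            buf += chunk
            try:
                return json.loads(buf.decode())  # parses once response is complete
            except ValueError:
                continue  # partial JSON so far; keep reading


if __name__ == "__main__":
    # Same sequence the scheduler test drives through rpc_cmd above.
    print(rpc_call("framework_set_scheduler", {"name": "dynamic"}))
    print(rpc_call("framework_start_init"))
    print(rpc_call("framework_get_scheduler"))

One request per connection keeps the framing trivial, which is why the sketch reconnects for each call instead of multiplexing ids.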
00:04:47.866 07:03:09 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.866 07:03:09 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:47.866 07:03:09 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:47.866 07:03:09 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:47.866 07:03:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:47.866 ************************************ 00:04:47.866 START TEST scheduler_create_thread 00:04:47.866 ************************************ 00:04:47.866 07:03:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:04:47.866 07:03:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:47.866 07:03:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.866 07:03:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.866 2 00:04:47.866 07:03:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.866 07:03:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:47.866 07:03:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.866 07:03:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.866 3 00:04:47.866 07:03:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.866 07:03:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:47.866 07:03:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.866 07:03:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.866 4 00:04:47.866 07:03:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.866 07:03:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:47.866 07:03:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.866 07:03:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.866 5 00:04:47.866 07:03:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.866 07:03:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:47.866 07:03:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.866 07:03:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.866 6 00:04:47.866 07:03:10 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.866 07:03:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:47.866 07:03:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.866 07:03:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.866 7 00:04:47.866 07:03:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.866 07:03:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:47.866 07:03:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.866 07:03:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.866 8 00:04:47.866 07:03:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.866 07:03:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:47.866 07:03:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.866 07:03:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.866 9 00:04:47.866 07:03:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.866 07:03:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:47.866 07:03:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.866 07:03:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.438 10 00:04:48.438 07:03:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.438 07:03:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:48.438 07:03:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.438 07:03:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.821 07:03:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.821 07:03:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:49.821 07:03:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:49.821 07:03:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.821 07:03:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.388 07:03:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.388 07:03:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:50.388 07:03:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.388 07:03:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.327 07:03:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.327 07:03:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:51.327 07:03:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:51.327 07:03:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.327 07:03:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.267 07:03:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.267 00:04:52.267 real 0m4.223s 00:04:52.267 user 0m0.027s 00:04:52.267 sys 0m0.005s 00:04:52.267 07:03:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:52.267 07:03:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.267 ************************************ 00:04:52.267 END TEST scheduler_create_thread 00:04:52.267 ************************************ 00:04:52.267 07:03:14 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:52.267 07:03:14 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3284646 00:04:52.267 07:03:14 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 3284646 ']' 00:04:52.267 07:03:14 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 3284646 00:04:52.267 07:03:14 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:04:52.267 07:03:14 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:52.267 07:03:14 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3284646 00:04:52.267 07:03:14 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:52.267 07:03:14 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:52.267 07:03:14 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3284646' 00:04:52.267 killing process with pid 3284646 00:04:52.267 07:03:14 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 3284646 00:04:52.267 07:03:14 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 3284646 00:04:52.527 [2024-11-20 07:03:14.553096] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
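[editor note] The -m arguments used throughout this run are hex core bitmaps: -m 0xF launched the apps on cores 0-3, and the scheduler_thread_create calls above pinned threads with -m 0x1/0x2/0x4/0x8, one core each. By contrast, -p 0x2 names the main lcore (core 2, matching --main-lcore=2 in the EAL parameters above) rather than acting as a mask. A small, purely illustrative decoder, assuming nothing beyond standard Python:

#!/usr/bin/env python3
"""Decode the hex core masks seen in this run (illustrative helper,
not part of the SPDK tree)."""


def cpumask_to_cores(mask):
    """Return the list of core ids whose bit is set in a hex mask."""
    value = int(mask, 16) if isinstance(mask, str) else mask
    cores = []
    core = 0
    while value:
        if value & 1:
            cores.append(core)
        value >>= 1
        core += 1
    return cores


if __name__ == "__main__":
    for flag in ("0x1", "0x2", "0x3", "0x8", "0xF"):
        print(flag, "->", cpumask_to_cores(flag))
    # 0x1 -> [0]   0x2 -> [1]   0x3 -> [0, 1]   0x8 -> [3]   0xF -> [0, 1, 2, 3]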
00:04:52.527 00:04:52.527 real 0m5.947s 00:04:52.527 user 0m13.838s 00:04:52.527 sys 0m0.445s 00:04:52.527 07:03:14 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:52.527 07:03:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:52.527 ************************************ 00:04:52.527 END TEST event_scheduler 00:04:52.527 ************************************ 00:04:52.527 07:03:14 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:52.527 07:03:14 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:52.527 07:03:14 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:52.527 07:03:14 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:52.527 07:03:14 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.527 ************************************ 00:04:52.527 START TEST app_repeat 00:04:52.527 ************************************ 00:04:52.527 07:03:14 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:04:52.527 07:03:14 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.527 07:03:14 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.527 07:03:14 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:52.527 07:03:14 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.527 07:03:14 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:52.527 07:03:14 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:52.527 07:03:14 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:52.788 07:03:14 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3286060 00:04:52.788 07:03:14 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.788 07:03:14 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:52.788 07:03:14 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3286060' 00:04:52.788 Process app_repeat pid: 3286060 00:04:52.788 07:03:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:52.788 07:03:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:52.788 spdk_app_start Round 0 00:04:52.788 07:03:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3286060 /var/tmp/spdk-nbd.sock 00:04:52.788 07:03:14 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3286060 ']' 00:04:52.788 07:03:14 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:52.788 07:03:14 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:52.788 07:03:14 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:52.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:52.788 07:03:14 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:52.788 07:03:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:52.788 [2024-11-20 07:03:14.832435] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
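
The app_repeat setup above launches the test binary in the background, installs a cleanup trap, and then runs three restart rounds against it. A condensed sketch of that harness follows, assuming the waitforlisten and killprocess helpers from the SPDK test tree are already sourced; paths and flags are copied from the trace, and the backgrounding of the app is inferred from the captured repeat_pid.

  # Sketch of the app_repeat round loop seen above.
  sock=/var/tmp/spdk-nbd.sock
  app=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat
  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

  "$app" -r "$sock" -m 0x3 -t 4 &      # two cores, 4 repeat iterations
  repeat_pid=$!
  trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

  for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" "$sock"          # poll until the app answers RPCs
    # ... create Malloc bdevs, attach nbd devices, run the data-verify pass ...
    "$rpc" -s "$sock" spdk_kill_instance SIGTERM # end this round
    sleep 3                                      # give the app time to restart
  done
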
00:04:52.788 [2024-11-20 07:03:14.832522] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286060 ] 00:04:52.788 [2024-11-20 07:03:14.917085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.788 [2024-11-20 07:03:14.950232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.788 [2024-11-20 07:03:14.950370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.788 07:03:15 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:52.788 07:03:15 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:52.788 07:03:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.048 Malloc0 00:04:53.048 07:03:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.309 Malloc1 00:04:53.309 07:03:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.309 07:03:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.309 07:03:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.309 07:03:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:53.309 07:03:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.309 07:03:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:53.309 07:03:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.309 07:03:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.309 07:03:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.309 07:03:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:53.309 07:03:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.309 07:03:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:53.309 07:03:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:53.309 07:03:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:53.309 07:03:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.309 07:03:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:53.309 /dev/nbd0 00:04:53.569 07:03:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:53.569 07:03:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:53.569 07:03:15 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:53.569 07:03:15 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:53.569 07:03:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:53.569 07:03:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:53.569 07:03:15 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:04:53.569 07:03:15 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:53.569 07:03:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:53.569 07:03:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:53.569 07:03:15 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.569 1+0 records in 00:04:53.569 1+0 records out 00:04:53.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273897 s, 15.0 MB/s 00:04:53.569 07:03:15 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.569 07:03:15 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:53.569 07:03:15 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.569 07:03:15 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:53.569 07:03:15 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:53.569 07:03:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.569 07:03:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.569 07:03:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:53.569 /dev/nbd1 00:04:53.569 07:03:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:53.829 07:03:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:53.829 07:03:15 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:53.829 07:03:15 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:53.829 07:03:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:53.829 07:03:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:53.829 07:03:15 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:53.829 07:03:15 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:53.829 07:03:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:53.829 07:03:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:53.829 07:03:15 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.829 1+0 records in 00:04:53.829 1+0 records out 00:04:53.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272708 s, 15.0 MB/s 00:04:53.829 07:03:15 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.829 07:03:15 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:53.829 07:03:15 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.829 07:03:15 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:53.829 07:03:15 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:53.829 07:03:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.829 07:03:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.829 
07:03:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.829 07:03:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.829 07:03:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.829 07:03:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:53.829 { 00:04:53.829 "nbd_device": "/dev/nbd0", 00:04:53.829 "bdev_name": "Malloc0" 00:04:53.829 }, 00:04:53.829 { 00:04:53.829 "nbd_device": "/dev/nbd1", 00:04:53.829 "bdev_name": "Malloc1" 00:04:53.829 } 00:04:53.829 ]' 00:04:53.829 07:03:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:53.829 { 00:04:53.829 "nbd_device": "/dev/nbd0", 00:04:53.829 "bdev_name": "Malloc0" 00:04:53.829 }, 00:04:53.829 { 00:04:53.829 "nbd_device": "/dev/nbd1", 00:04:53.829 "bdev_name": "Malloc1" 00:04:53.829 } 00:04:53.829 ]' 00:04:53.829 07:03:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.829 07:03:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:53.829 /dev/nbd1' 00:04:53.829 07:03:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:53.829 /dev/nbd1' 00:04:53.829 07:03:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:54.089 256+0 records in 00:04:54.089 256+0 records out 00:04:54.089 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012038 s, 87.1 MB/s 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:54.089 256+0 records in 00:04:54.089 256+0 records out 00:04:54.089 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120122 s, 87.3 MB/s 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:54.089 256+0 records in 00:04:54.089 256+0 records out 00:04:54.089 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012744 s, 82.3 MB/s 00:04:54.089 07:03:16 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:54.089 07:03:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:54.350 07:03:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:54.350 07:03:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:54.350 07:03:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.350 07:03:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.350 07:03:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:54.350 07:03:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:54.350 07:03:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.350 07:03:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.350 07:03:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:54.350 07:03:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:54.350 07:03:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:54.350 07:03:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:54.350 07:03:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.350 07:03:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:54.350 07:03:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:54.350 07:03:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:54.350 07:03:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.350 07:03:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.350 07:03:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.350 07:03:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.611 07:03:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:54.611 07:03:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:54.611 07:03:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.611 07:03:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:54.611 07:03:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:54.611 07:03:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.611 07:03:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:54.611 07:03:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:54.611 07:03:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:54.611 07:03:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:54.611 07:03:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:54.611 07:03:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:54.611 07:03:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:54.872 07:03:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:54.872 [2024-11-20 07:03:17.053759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.872 [2024-11-20 07:03:17.083037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.872 [2024-11-20 07:03:17.083037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.872 [2024-11-20 07:03:17.112165] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:54.872 [2024-11-20 07:03:17.112197] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:58.169 07:03:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:58.169 07:03:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:58.169 spdk_app_start Round 1 00:04:58.169 07:03:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3286060 /var/tmp/spdk-nbd.sock 00:04:58.169 07:03:19 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3286060 ']' 00:04:58.169 07:03:19 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:58.169 07:03:19 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:58.169 07:03:19 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:58.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:58.169 07:03:19 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:58.169 07:03:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:58.169 07:03:20 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:58.169 07:03:20 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:58.169 07:03:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.169 Malloc0 00:04:58.169 07:03:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.429 Malloc1 00:04:58.429 07:03:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.429 07:03:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.429 07:03:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.429 07:03:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:58.429 07:03:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.429 07:03:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:58.429 07:03:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.429 07:03:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.429 07:03:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.429 07:03:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:58.429 07:03:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.429 07:03:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:58.429 07:03:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:58.429 07:03:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:58.429 07:03:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.429 07:03:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:58.429 /dev/nbd0 00:04:58.429 07:03:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:58.429 07:03:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:58.690 07:03:20 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:58.690 07:03:20 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:58.690 07:03:20 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:58.690 07:03:20 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:58.690 07:03:20 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:58.690 07:03:20 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:58.690 07:03:20 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:58.690 07:03:20 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:58.690 07:03:20 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:58.690 1+0 records in 00:04:58.690 1+0 records out 00:04:58.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280511 s, 14.6 MB/s 00:04:58.690 07:03:20 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.690 07:03:20 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:58.690 07:03:20 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.690 07:03:20 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:58.690 07:03:20 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:58.690 07:03:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.690 07:03:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.690 07:03:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:58.690 /dev/nbd1 00:04:58.691 07:03:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:58.691 07:03:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:58.691 07:03:20 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:58.691 07:03:20 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:58.691 07:03:20 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:58.691 07:03:20 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:58.691 07:03:20 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:58.691 07:03:20 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:58.691 07:03:20 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:58.691 07:03:20 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:58.691 07:03:20 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.691 1+0 records in 00:04:58.691 1+0 records out 00:04:58.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226058 s, 18.1 MB/s 00:04:58.691 07:03:20 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.691 07:03:20 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:58.691 07:03:20 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.691 07:03:20 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:58.691 07:03:20 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:58.691 07:03:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.691 07:03:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.691 07:03:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.691 07:03:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.691 07:03:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.951 07:03:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:58.952 { 00:04:58.952 "nbd_device": "/dev/nbd0", 00:04:58.952 "bdev_name": "Malloc0" 00:04:58.952 }, 00:04:58.952 { 00:04:58.952 "nbd_device": "/dev/nbd1", 00:04:58.952 "bdev_name": "Malloc1" 00:04:58.952 } 00:04:58.952 ]' 00:04:58.952 07:03:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:58.952 { 00:04:58.952 "nbd_device": "/dev/nbd0", 00:04:58.952 "bdev_name": "Malloc0" 00:04:58.952 }, 00:04:58.952 { 00:04:58.952 "nbd_device": "/dev/nbd1", 00:04:58.952 "bdev_name": "Malloc1" 00:04:58.952 } 00:04:58.952 ]' 00:04:58.952 07:03:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.952 07:03:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:58.952 /dev/nbd1' 00:04:58.952 07:03:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:58.952 /dev/nbd1' 00:04:58.952 07:03:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.952 07:03:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:58.952 07:03:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:58.952 07:03:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:58.952 07:03:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:58.952 07:03:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:58.952 07:03:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.952 07:03:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.952 07:03:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:58.952 07:03:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.952 07:03:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:58.952 07:03:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:58.952 256+0 records in 00:04:58.952 256+0 records out 00:04:58.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127707 s, 82.1 MB/s 00:04:58.952 07:03:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.952 07:03:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:58.952 256+0 records in 00:04:58.952 256+0 records out 00:04:58.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123271 s, 85.1 MB/s 00:04:58.952 07:03:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.952 07:03:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:59.212 256+0 records in 00:04:59.212 256+0 records out 00:04:59.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129323 s, 81.1 MB/s 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.212 07:03:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:59.472 07:03:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:59.472 07:03:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:59.472 07:03:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:59.472 07:03:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.472 07:03:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.472 07:03:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:59.472 07:03:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:59.472 07:03:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.472 07:03:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.472 07:03:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.472 07:03:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.731 07:03:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:59.731 07:03:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:59.731 07:03:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.731 07:03:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:59.731 07:03:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:59.731 07:03:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.731 07:03:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:59.731 07:03:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:59.731 07:03:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:59.731 07:03:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:59.731 07:03:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:59.731 07:03:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:59.732 07:03:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:59.991 07:03:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:59.991 [2024-11-20 07:03:22.131870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.991 [2024-11-20 07:03:22.161356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.991 [2024-11-20 07:03:22.161357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.991 [2024-11-20 07:03:22.191035] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:59.991 [2024-11-20 07:03:22.191065] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:03.283 07:03:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:03.283 07:03:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:03.283 spdk_app_start Round 2 00:05:03.283 07:03:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3286060 /var/tmp/spdk-nbd.sock 00:05:03.283 07:03:25 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3286060 ']' 00:05:03.283 07:03:25 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.283 07:03:25 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:03.283 07:03:25 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:03.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
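
The write/verify pass repeated in every round above boils down to: generate 1 MiB of random data, write it to each nbd device with O_DIRECT, then byte-compare it back. A condensed sketch of that nbd_dd_data_verify flow, with the dd and cmp invocations taken verbatim from the trace (only the scratch path is illustrative):

  nbd_list=(/dev/nbd0 /dev/nbd1)
  tmp=/tmp/nbdrandtest

  dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
  for nbd in "${nbd_list[@]}"; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write it out
  done
  for nbd in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp" "$nbd"                              # fails loudly on mismatch
  done
  rm "$tmp"
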
00:05:03.283 07:03:25 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:03.283 07:03:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.283 07:03:25 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:03.283 07:03:25 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:03.284 07:03:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.284 Malloc0 00:05:03.284 07:03:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.543 Malloc1 00:05:03.543 07:03:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.543 07:03:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.543 07:03:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.543 07:03:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:03.543 07:03:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.543 07:03:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:03.543 07:03:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.543 07:03:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.543 07:03:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.543 07:03:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:03.543 07:03:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.543 07:03:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:03.543 07:03:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:03.543 07:03:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:03.543 07:03:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.543 07:03:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:03.543 /dev/nbd0 00:05:03.803 07:03:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:03.803 07:03:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:03.803 07:03:25 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:03.803 07:03:25 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:03.803 07:03:25 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:03.803 07:03:25 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:03.803 07:03:25 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:03.803 07:03:25 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:03.803 07:03:25 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:03.803 07:03:25 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:03.803 07:03:25 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:03.803 1+0 records in 00:05:03.803 1+0 records out 00:05:03.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291785 s, 14.0 MB/s 00:05:03.803 07:03:25 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:03.803 07:03:25 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:03.803 07:03:25 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:03.803 07:03:25 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:03.803 07:03:25 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:03.803 07:03:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.803 07:03:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.804 07:03:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:03.804 /dev/nbd1 00:05:03.804 07:03:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:03.804 07:03:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:03.804 07:03:26 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:03.804 07:03:26 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:03.804 07:03:26 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:03.804 07:03:26 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:03.804 07:03:26 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:03.804 07:03:26 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:03.804 07:03:26 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:03.804 07:03:26 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:03.804 07:03:26 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.804 1+0 records in 00:05:03.804 1+0 records out 00:05:03.804 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319589 s, 12.8 MB/s 00:05:03.804 07:03:26 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:03.804 07:03:26 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:03.804 07:03:26 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.064 07:03:26 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:04.064 07:03:26 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:04.064 { 00:05:04.064 "nbd_device": "/dev/nbd0", 00:05:04.064 "bdev_name": "Malloc0" 00:05:04.064 }, 00:05:04.064 { 00:05:04.064 "nbd_device": "/dev/nbd1", 00:05:04.064 "bdev_name": "Malloc1" 00:05:04.064 } 00:05:04.064 ]' 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:04.064 { 00:05:04.064 "nbd_device": "/dev/nbd0", 00:05:04.064 "bdev_name": "Malloc0" 00:05:04.064 }, 00:05:04.064 { 00:05:04.064 "nbd_device": "/dev/nbd1", 00:05:04.064 "bdev_name": "Malloc1" 00:05:04.064 } 00:05:04.064 ]' 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:04.064 /dev/nbd1' 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:04.064 /dev/nbd1' 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:04.064 256+0 records in 00:05:04.064 256+0 records out 00:05:04.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127189 s, 82.4 MB/s 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.064 07:03:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:04.324 256+0 records in 00:05:04.324 256+0 records out 00:05:04.324 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125152 s, 83.8 MB/s 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:04.324 256+0 records in 00:05:04.324 256+0 records out 00:05:04.324 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129357 s, 81.1 MB/s 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.324 07:03:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:04.584 07:03:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:04.584 07:03:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:04.584 07:03:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:04.584 07:03:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.584 07:03:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.584 07:03:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:04.584 07:03:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:04.584 07:03:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.584 07:03:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.584 07:03:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.584 07:03:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.844 07:03:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:04.844 07:03:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:04.844 07:03:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.844 07:03:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:04.844 07:03:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:04.844 07:03:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.844 07:03:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:04.844 07:03:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:04.844 07:03:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:04.844 07:03:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:04.844 07:03:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:04.844 07:03:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:04.844 07:03:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:05.104 07:03:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:05.104 [2024-11-20 07:03:27.277616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.104 [2024-11-20 07:03:27.306931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.104 [2024-11-20 07:03:27.306932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.104 [2024-11-20 07:03:27.336548] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:05.104 [2024-11-20 07:03:27.336578] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:08.398 07:03:30 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3286060 /var/tmp/spdk-nbd.sock 00:05:08.398 07:03:30 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3286060 ']' 00:05:08.398 07:03:30 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:08.398 07:03:30 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:08.398 07:03:30 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:08.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
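
The nbd_get_count checks above ask the app which nbd devices are still attached and count them with jq and grep; after both nbd_stop_disk calls the JSON is '[]' and the count must drop to 0. A sketch of that check, assuming jq is installed; the `|| true` guard against grep -c's non-zero exit on zero matches is inferred from the lone `true` in the trace.

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
  sock=/var/tmp/spdk-nbd.sock

  nbd_disks_json=$("$rpc" -s "$sock" nbd_get_disks)
  nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
  count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)  # grep -c exits 1 on 0 matches
  [ "$count" -eq 0 ] || echo "devices still attached: $nbd_disks_name"
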
00:05:08.398 07:03:30 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:08.398 07:03:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:08.398 07:03:30 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:08.398 07:03:30 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:08.398 07:03:30 event.app_repeat -- event/event.sh@39 -- # killprocess 3286060 00:05:08.398 07:03:30 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 3286060 ']' 00:05:08.398 07:03:30 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 3286060 00:05:08.398 07:03:30 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:08.398 07:03:30 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:08.398 07:03:30 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3286060 00:05:08.398 07:03:30 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:08.398 07:03:30 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:08.398 07:03:30 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3286060' 00:05:08.398 killing process with pid 3286060 00:05:08.398 07:03:30 event.app_repeat -- common/autotest_common.sh@971 -- # kill 3286060 00:05:08.398 07:03:30 event.app_repeat -- common/autotest_common.sh@976 -- # wait 3286060 00:05:08.398 spdk_app_start is called in Round 0. 00:05:08.398 Shutdown signal received, stop current app iteration 00:05:08.398 Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 reinitialization... 00:05:08.398 spdk_app_start is called in Round 1. 00:05:08.398 Shutdown signal received, stop current app iteration 00:05:08.398 Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 reinitialization... 00:05:08.398 spdk_app_start is called in Round 2. 00:05:08.398 Shutdown signal received, stop current app iteration 00:05:08.398 Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 reinitialization... 00:05:08.398 spdk_app_start is called in Round 3. 
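
killprocess, as it appears in both teardown traces above, verifies the pid is still alive, inspects the process name so it never signals a bare sudo wrapper, then kills and reaps it. In this sketch the sudo branch is simplified to a bail-out (the real helper does more there, but the trace only shows the non-sudo path):

  killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                       # still running?
    if [ "$(uname)" = Linux ]; then
      process_name=$(ps --no-headers -o comm= "$pid")
      [ "$process_name" = sudo ] && return 1         # simplified: refuse to kill sudo
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                      # reap; propagates exit status
  }
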
00:05:08.398 Shutdown signal received, stop current app iteration
00:05:08.398 07:03:30 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:08.398 07:03:30 event.app_repeat -- event/event.sh@42 -- # return 0
00:05:08.398
00:05:08.398 real 0m15.746s
00:05:08.398 user 0m34.428s
00:05:08.398 sys 0m2.294s
00:05:08.398 07:03:30 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:08.398 07:03:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:08.398 ************************************
00:05:08.398 END TEST app_repeat
00:05:08.398 ************************************
00:05:08.398 07:03:30 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:08.398 07:03:30 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:08.398 07:03:30 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:08.398 07:03:30 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:08.398 07:03:30 event -- common/autotest_common.sh@10 -- # set +x
00:05:08.398 ************************************
00:05:08.398 START TEST cpu_locks
00:05:08.398 ************************************
00:05:08.398 07:03:30 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:08.658 * Looking for test storage...
00:05:08.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:08.658 07:03:30 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:08.658 07:03:30 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version
00:05:08.658 07:03:30 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:08.659 07:03:30 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:08.659 07:03:30 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:05:08.659 07:03:30 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:08.659 07:03:30 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:08.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:08.659 --rc genhtml_branch_coverage=1
00:05:08.659 --rc genhtml_function_coverage=1
00:05:08.659 --rc genhtml_legend=1
00:05:08.659 --rc geninfo_all_blocks=1
00:05:08.659 --rc geninfo_unexecuted_blocks=1
00:05:08.659
00:05:08.659 '
00:05:08.659 07:03:30 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:08.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:08.659 --rc genhtml_branch_coverage=1
00:05:08.659 --rc genhtml_function_coverage=1
00:05:08.659 --rc genhtml_legend=1
00:05:08.659 --rc geninfo_all_blocks=1
00:05:08.659 --rc geninfo_unexecuted_blocks=1
00:05:08.659
00:05:08.659 '
00:05:08.659 07:03:30 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:05:08.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:08.659 --rc genhtml_branch_coverage=1
00:05:08.659 --rc genhtml_function_coverage=1
00:05:08.659 --rc genhtml_legend=1
00:05:08.659 --rc geninfo_all_blocks=1
00:05:08.659 --rc geninfo_unexecuted_blocks=1
00:05:08.659
00:05:08.659 '
00:05:08.659 07:03:30 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:05:08.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:08.659 --rc genhtml_branch_coverage=1
00:05:08.659 --rc genhtml_function_coverage=1
00:05:08.659 --rc genhtml_legend=1
00:05:08.659 --rc geninfo_all_blocks=1
00:05:08.659 --rc geninfo_unexecuted_blocks=1
00:05:08.659
00:05:08.659 '
00:05:08.659 07:03:30 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:08.659 07:03:30 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:08.659 07:03:30 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:08.659 07:03:30 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:08.659 07:03:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:08.659 07:03:30 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:08.659 07:03:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:08.659 ************************************
00:05:08.659 START TEST default_locks
************************************
00:05:08.659 07:03:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks
00:05:08.659 07:03:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3289631
00:05:08.659 07:03:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3289631
00:05:08.659 07:03:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:08.659 07:03:30 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3289631 ']'
00:05:08.659 07:03:30 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:08.659 07:03:30 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:08.659 07:03:30 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:08.659 07:03:30 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:08.659 07:03:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:08.659 [2024-11-20 07:03:30.915459] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:05:08.659 [2024-11-20 07:03:30.915520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3289631 ]
00:05:08.918 [2024-11-20 07:03:31.001717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:08.918 [2024-11-20 07:03:31.036605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:09.487 07:03:31 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:09.487 07:03:31 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0
00:05:09.487 07:03:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3289631
00:05:09.487 07:03:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3289631
00:05:09.487 07:03:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:10.055 lslocks: write error
00:05:10.055 07:03:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3289631
00:05:10.055 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 3289631 ']'
00:05:10.055 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 3289631
00:05:10.055 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname
00:05:10.055 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:10.055 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3289631
00:05:10.055 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:10.055 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:10.055 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3289631'
00:05:10.055 killing process with pid 3289631
07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 3289631
00:05:10.055 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 3289631
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3289631
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3289631
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3289631
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3289631 ']'
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:10.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3289631) - No such process
00:05:10.315 ERROR: process (pid: 3289631) is no longer running
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:10.315
00:05:10.315 real 0m1.584s
00:05:10.315 user 0m1.686s
00:05:10.315 sys 0m0.574s
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:10.315 07:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:10.315 ************************************
00:05:10.315 END TEST default_locks
00:05:10.315 ************************************
00:05:10.316 07:03:32 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:10.316 07:03:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:10.316 07:03:32 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:10.316 07:03:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:10.316 ************************************
00:05:10.316 START TEST default_locks_via_rpc
************************************
00:05:10.316 07:03:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc
00:05:10.316 07:03:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3289999
00:05:10.316 07:03:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3289999
00:05:10.316 07:03:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:10.316 07:03:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3289999 ']'
00:05:10.316 07:03:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:10.316 07:03:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:10.316 07:03:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:10.316 07:03:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:10.316 07:03:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:10.316 [2024-11-20 07:03:32.571327] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:05:10.316 [2024-11-20 07:03:32.571389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3289999 ]
00:05:10.575 [2024-11-20 07:03:32.657622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:10.575 [2024-11-20 07:03:32.690965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:11.143 07:03:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:11.143 07:03:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0
00:05:11.143 07:03:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:11.144 07:03:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:11.144 07:03:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:11.144 07:03:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:11.144 07:03:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:11.144 07:03:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:11.144 07:03:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:11.144 07:03:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:11.144 07:03:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:11.144 07:03:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:11.144 07:03:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:11.144 07:03:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:11.144 07:03:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3289999
00:05:11.144 07:03:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3289999
00:05:11.144 07:03:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:11.713 07:03:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3289999
00:05:11.713 07:03:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 3289999 ']'
00:05:11.713 07:03:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 3289999
00:05:11.713 07:03:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname
00:05:11.713 07:03:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:11.713 07:03:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3289999
00:05:11.713 07:03:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:11.713 07:03:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:11.713 07:03:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3289999'
killing process with pid 3289999
07:03:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 3289999
00:05:11.713 07:03:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 3289999
00:05:11.973
00:05:11.973 real 0m1.572s
00:05:11.973 user 0m1.695s
00:05:11.973 sys 0m0.546s
00:05:11.973 07:03:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:11.973 07:03:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:11.973 ************************************
00:05:11.973 END TEST default_locks_via_rpc
00:05:11.973 ************************************
00:05:11.973 07:03:34 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:11.973 07:03:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:11.973 07:03:34 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:11.973 07:03:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:11.973 ************************************
00:05:11.973 START TEST non_locking_app_on_locked_coremask
************************************
00:05:11.973 07:03:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask
00:05:11.973 07:03:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3290365
00:05:11.973 07:03:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3290365 /var/tmp/spdk.sock
00:05:11.973 07:03:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:11.973 07:03:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3290365 ']'
00:05:11.973 07:03:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:11.973 07:03:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:11.973 07:03:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:11.973 07:03:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:11.973 07:03:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:11.973 [2024-11-20 07:03:34.218957] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:05:11.973 [2024-11-20 07:03:34.219010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290365 ]
00:05:12.233 [2024-11-20 07:03:34.300885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:12.233 [2024-11-20 07:03:34.331633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:12.803 07:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:12.803 07:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:12.803 07:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3290392
00:05:12.803 07:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3290392 /var/tmp/spdk2.sock
00:05:12.803 07:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3290392 ']'
00:05:12.803 07:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:12.803 07:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:12.803 07:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:12.803 07:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:12.803 07:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:12.803 07:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:12.803 [2024-11-20 07:03:35.069559] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:05:13.062 [2024-11-20 07:03:35.069612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290392 ]
[2024-11-20 07:03:35.156204] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:13.063 [2024-11-20 07:03:35.156234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:13.063 [2024-11-20 07:03:35.218604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:13.633 07:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:13.633 07:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:13.633 07:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3290365
00:05:13.633 07:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3290365
00:05:13.633 07:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:14.201 lslocks: write error
00:05:14.201 07:03:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3290365
00:05:14.201 07:03:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3290365 ']'
00:05:14.201 07:03:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3290365
00:05:14.201 07:03:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:05:14.201 07:03:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:14.201 07:03:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3290365
00:05:14.460 07:03:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:14.460 07:03:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:14.460 07:03:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3290365'
killing process with pid 3290365
07:03:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3290365
00:05:14.460 07:03:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3290365
00:05:14.720 07:03:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3290392
00:05:14.720 07:03:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3290392 ']'
00:05:14.720 07:03:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3290392
00:05:14.720 07:03:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:05:14.720 07:03:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:14.720 07:03:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3290392
00:05:14.720 07:03:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:14.720 07:03:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:14.720 07:03:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3290392'
killing process with pid 3290392
07:03:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3290392
00:05:14.720 07:03:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3290392
00:05:15.014
00:05:15.014 real 0m2.973s
00:05:15.014 user 0m3.333s
00:05:15.014 sys 0m0.894s
00:05:15.014 07:03:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:15.014 07:03:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:15.014 ************************************
00:05:15.014 END TEST non_locking_app_on_locked_coremask
00:05:15.014 ************************************
00:05:15.014 07:03:37 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:15.014 07:03:37 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:15.014 07:03:37 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:15.014 07:03:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:15.014 ************************************
00:05:15.014 START TEST locking_app_on_unlocked_coremask
************************************
00:05:15.014 07:03:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask
00:05:15.014 07:03:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3291005
00:05:15.014 07:03:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3291005 /var/tmp/spdk.sock
00:05:15.014 07:03:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:15.014 07:03:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3291005 ']'
00:05:15.014 07:03:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:15.014 07:03:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:15.014 07:03:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:15.014 07:03:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:15.014 07:03:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:15.014 [2024-11-20 07:03:37.269443] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:05:15.014 [2024-11-20 07:03:37.269496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291005 ]
[2024-11-20 07:03:37.355708] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:15.274 [2024-11-20 07:03:37.355732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:15.274 [2024-11-20 07:03:37.388702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:15.843 07:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:15.843 07:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:15.843 07:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:15.843 07:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3291092
00:05:15.843 07:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3291092 /var/tmp/spdk2.sock
00:05:15.843 07:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3291092 ']'
00:05:15.843 07:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:15.843 07:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:15.843 07:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:15.843 07:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:15.843 07:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:15.843 [2024-11-20 07:03:38.090836] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:05:15.843 [2024-11-20 07:03:38.090887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291092 ]
00:05:16.101 [2024-11-20 07:03:38.178708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:16.101 [2024-11-20 07:03:38.240817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:16.671 07:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:16.671 07:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:16.671 07:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3291092
00:05:16.671 07:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3291092
00:05:16.671 07:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:17.741 lslocks: write error
00:05:17.741 07:03:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3291005
00:05:17.741 07:03:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3291005 ']'
00:05:17.741 07:03:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3291005
00:05:17.741 07:03:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname
00:05:17.741 07:03:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:17.741 07:03:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3291005
00:05:17.741 07:03:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:17.741 07:03:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:17.741 07:03:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3291005'
killing process with pid 3291005
07:03:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3291005
00:05:17.741 07:03:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3291005
00:05:17.999 07:03:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3291092
00:05:17.999 07:03:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3291092 ']'
00:05:17.999 07:03:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3291092
00:05:17.999 07:03:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname
00:05:17.999 07:03:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:18.259 07:03:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3291092
00:05:18.259 07:03:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:18.259 07:03:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:18.259 07:03:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3291092'
killing process with pid 3291092
07:03:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3291092
00:05:18.259 07:03:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3291092
00:05:18.259
00:05:18.259 real 0m3.310s
00:05:18.259 user 0m3.656s
00:05:18.259 sys 0m1.034s
00:05:18.259 07:03:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:18.259 07:03:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:18.259 ************************************
00:05:18.259 END TEST locking_app_on_unlocked_coremask
00:05:18.259 ************************************
00:05:18.519 07:03:40 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:18.519 07:03:40 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:18.519 07:03:40 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:18.519 07:03:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:18.519 ************************************
00:05:18.519 START TEST locking_app_on_locked_coremask
************************************
00:05:18.519 07:03:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask
00:05:18.519 07:03:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3291725
00:05:18.519 07:03:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3291725 /var/tmp/spdk.sock
00:05:18.520 07:03:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:18.520 07:03:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3291725 ']'
00:05:18.520 07:03:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:18.520 07:03:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:18.520 07:03:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:18.520 07:03:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:18.520 07:03:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:18.520 [2024-11-20 07:03:40.654922] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:05:18.520 [2024-11-20 07:03:40.654979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291725 ]
00:05:18.520 [2024-11-20 07:03:40.744280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:18.520 [2024-11-20 07:03:40.784102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:19.458 07:03:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:19.458 07:03:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:19.458 07:03:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:19.458 07:03:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3291806
00:05:19.458 07:03:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3291806 /var/tmp/spdk2.sock
00:05:19.458 07:03:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:05:19.458 07:03:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3291806 /var/tmp/spdk2.sock
00:05:19.458 07:03:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:19.458 07:03:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:19.458 07:03:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:19.458 07:03:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:19.458 07:03:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3291806 /var/tmp/spdk2.sock
00:05:19.458 07:03:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3291806 ']'
00:05:19.458 07:03:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:19.458 07:03:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:19.458 07:03:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:19.458 07:03:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:19.458 07:03:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:19.458 [2024-11-20 07:03:41.505131] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:05:19.458 [2024-11-20 07:03:41.505190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291806 ]
00:05:19.458 [2024-11-20 07:03:41.595113] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3291725 has claimed it.
00:05:19.458 [2024-11-20 07:03:41.595145] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:20.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3291806) - No such process
00:05:20.027 ERROR: process (pid: 3291806) is no longer running
00:05:20.027 07:03:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:20.027 07:03:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1
00:05:20.027 07:03:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:05:20.027 07:03:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:20.027 07:03:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:20.027 07:03:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:20.027 07:03:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3291725
00:05:20.027 07:03:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3291725
00:05:20.027 07:03:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:20.596 lslocks: write error
00:05:20.596 07:03:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3291725
00:05:20.596 07:03:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3291725 ']'
00:05:20.596 07:03:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3291725
00:05:20.596 07:03:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:05:20.596 07:03:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:20.596 07:03:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3291725
00:05:20.596 07:03:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:20.596 07:03:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:20.596 07:03:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3291725'
killing process with pid 3291725
07:03:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3291725
00:05:20.596 07:03:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3291725
00:05:20.857
00:05:20.857 real 0m2.322s
00:05:20.857 user 0m2.614s
00:05:20.857 sys 0m0.659s
00:05:20.857 07:03:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:20.857 07:03:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:20.857 ************************************
00:05:20.857 END TEST locking_app_on_locked_coremask
00:05:20.857 ************************************
00:05:20.857 07:03:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:20.857 07:03:42 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:20.857 07:03:42 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:20.857 07:03:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:20.857 ************************************
00:05:20.857 START TEST locking_overlapped_coremask
************************************
00:05:20.857 07:03:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask
00:05:20.857 07:03:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3292173
00:05:20.857 07:03:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3292173 /var/tmp/spdk.sock
00:05:20.857 07:03:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:05:20.857 07:03:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3292173 ']'
00:05:20.857 07:03:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:20.857 07:03:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:20.857 07:03:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:20.857 07:03:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:20.857 07:03:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:20.857 [2024-11-20 07:03:43.053982] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:05:20.857 [2024-11-20 07:03:43.054032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292173 ]
00:05:21.118 [2024-11-20 07:03:43.136037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:21.118 [2024-11-20 07:03:43.168630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:21.118 [2024-11-20 07:03:43.168780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:21.118 [2024-11-20 07:03:43.168781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:21.690 07:03:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:21.690 07:03:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:21.690 07:03:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3292365
00:05:21.690 07:03:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3292365 /var/tmp/spdk2.sock
00:05:21.690 07:03:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:05:21.690 07:03:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:21.690 07:03:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3292365 /var/tmp/spdk2.sock
00:05:21.690 07:03:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:21.690 07:03:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:21.690 07:03:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:21.690 07:03:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:21.690 07:03:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3292365 /var/tmp/spdk2.sock
00:05:21.690 07:03:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3292365 ']'
00:05:21.690 07:03:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:21.690 07:03:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:21.690 07:03:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:21.690 07:03:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:21.690 07:03:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:21.690 [2024-11-20 07:03:43.915758] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:05:21.690 [2024-11-20 07:03:43.915813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292365 ]
00:05:21.960 [2024-11-20 07:03:44.028456] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3292173 has claimed it.
00:05:21.960 [2024-11-20 07:03:44.028497] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:22.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3292365) - No such process
00:05:22.529 ERROR: process (pid: 3292365) is no longer running
00:05:22.529 07:03:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:22.529 07:03:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1
00:05:22.529 07:03:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
00:05:22.529 07:03:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:22.529 07:03:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:22.529 07:03:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:22.529 07:03:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:22.529 07:03:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:22.529 07:03:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:22.529 07:03:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:22.529 07:03:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3292173
00:05:22.529 07:03:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 3292173 ']'
00:05:22.529 07:03:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 3292173
00:05:22.529 07:03:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname
00:05:22.529 07:03:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:22.529 07:03:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3292173
00:05:22.529 07:03:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:22.529 07:03:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:22.529 07:03:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3292173'
killing process with pid 3292173
07:03:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 3292173
00:05:22.529 07:03:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 3292173
00:05:22.530
00:05:22.530 real 0m1.781s
00:05:22.530 user 0m5.187s
00:05:22.530 sys 0m0.379s
00:05:22.530 07:03:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:22.530 07:03:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:22.530 ************************************
00:05:22.530 END TEST locking_overlapped_coremask
00:05:22.530 ************************************
00:05:22.790 07:03:44 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:22.790 07:03:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:22.790 07:03:44 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:22.790 07:03:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:22.790 ************************************
00:05:22.790 START TEST locking_overlapped_coremask_via_rpc
************************************
00:05:22.790 07:03:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc
00:05:22.790 07:03:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3292547
00:05:22.790 07:03:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3292547 /var/tmp/spdk.sock
00:05:22.790 07:03:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:22.790 07:03:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3292547 ']'
00:05:22.790 07:03:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:22.790 07:03:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:22.790 07:03:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
07:03:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:22.790 07:03:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:22.790 [2024-11-20 07:03:44.909030] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:05:22.790 [2024-11-20 07:03:44.909081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292547 ]
[2024-11-20 07:03:44.996012] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:22.790 [2024-11-20 07:03:44.996033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:22.790 [2024-11-20 07:03:45.028026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:22.790 [2024-11-20 07:03:45.028195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:22.790 [2024-11-20 07:03:45.028203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:23.729 07:03:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:23.729 07:03:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0
00:05:23.729 07:03:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3292807
00:05:23.729 07:03:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3292807 /var/tmp/spdk2.sock
00:05:23.729 07:03:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3292807 ']'
00:05:23.729 07:03:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:05:23.729 07:03:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:23.729 07:03:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:23.729 07:03:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
07:03:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:23.729 07:03:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:23.729 [2024-11-20 07:03:45.740576] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:05:23.729 [2024-11-20 07:03:45.740634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292807 ]
[2024-11-20 07:03:45.858205] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:23.729 [2024-11-20 07:03:45.858234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:23.729 [2024-11-20 07:03:45.931768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:23.729 [2024-11-20 07:03:45.932216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.729 [2024-11-20 07:03:45.932216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.296 [2024-11-20 07:03:46.541240] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3292547 has claimed it. 
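Note on the failure above: it is the expected negative-path result. The first target (pid 3292547, mask 0x7) owns cores 0-2 once cpumask locks are enabled over RPC, while the second instance's mask 0x1c covers cores 2-4, so the claim collides on core 2. A minimal sketch of probing the per-core lock files by hand; the /var/tmp/spdk_cpu_lock_NNN naming is taken from the check_remaining_locks trace above, but the assumption that each file is held with an exclusive flock(2) by its owning reactor is inferred, not verified by this log:
  # Hedged sketch: probe which cores are currently claimed.
  # Assumption (not confirmed here): owners hold an exclusive flock(2)
  # on each lock file, so a non-blocking flock fails while they live.
  for f in /var/tmp/spdk_cpu_lock_*; do
      [ -e "$f" ] || continue                # glob stayed literal: no locks
      if flock -n "$f" true 2>/dev/null; then
          echo "$f: free"
      else
          echo "$f: claimed by a running spdk_tgt"
      fi
  done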
00:05:24.296 request: 00:05:24.296 { 00:05:24.296 "method": "framework_enable_cpumask_locks", 00:05:24.296 "req_id": 1 00:05:24.296 } 00:05:24.296 Got JSON-RPC error response 00:05:24.296 response: 00:05:24.296 { 00:05:24.296 "code": -32603, 00:05:24.296 "message": "Failed to claim CPU core: 2" 00:05:24.296 } 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3292547 /var/tmp/spdk.sock 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3292547 ']' 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:24.296 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.556 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:24.556 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:24.556 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3292807 /var/tmp/spdk2.sock 00:05:24.556 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3292807 ']' 00:05:24.556 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.556 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:24.556 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:24.556 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:24.556 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.815 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:24.815 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:24.816 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:24.816 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:24.816 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:24.816 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:24.816 00:05:24.816 real 0m2.064s 00:05:24.816 user 0m0.845s 00:05:24.816 sys 0m0.147s 00:05:24.816 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:24.816 07:03:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.816 ************************************ 00:05:24.816 END TEST locking_overlapped_coremask_via_rpc 00:05:24.816 ************************************ 00:05:24.816 07:03:46 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:24.816 07:03:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3292547 ]] 00:05:24.816 07:03:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3292547 00:05:24.816 07:03:46 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3292547 ']' 00:05:24.816 07:03:46 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3292547 00:05:24.816 07:03:46 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:24.816 07:03:46 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:24.816 07:03:46 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3292547 00:05:24.816 07:03:47 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:24.816 07:03:47 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:24.816 07:03:47 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3292547' 00:05:24.816 killing process with pid 3292547 00:05:24.816 07:03:47 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3292547 00:05:24.816 07:03:47 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3292547 00:05:25.074 07:03:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3292807 ]] 00:05:25.074 07:03:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3292807 00:05:25.074 07:03:47 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3292807 ']' 00:05:25.074 07:03:47 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3292807 00:05:25.074 07:03:47 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:25.074 07:03:47 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:05:25.074 07:03:47 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3292807 00:05:25.074 07:03:47 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:25.074 07:03:47 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:25.074 07:03:47 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3292807' 00:05:25.074 killing process with pid 3292807 00:05:25.074 07:03:47 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3292807 00:05:25.074 07:03:47 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3292807 00:05:25.333 07:03:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:25.333 07:03:47 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:25.333 07:03:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3292547 ]] 00:05:25.333 07:03:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3292547 00:05:25.333 07:03:47 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3292547 ']' 00:05:25.333 07:03:47 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3292547 00:05:25.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3292547) - No such process 00:05:25.333 07:03:47 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3292547 is not found' 00:05:25.333 Process with pid 3292547 is not found 00:05:25.333 07:03:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3292807 ]] 00:05:25.333 07:03:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3292807 00:05:25.333 07:03:47 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3292807 ']' 00:05:25.333 07:03:47 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3292807 00:05:25.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3292807) - No such process 00:05:25.333 07:03:47 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3292807 is not found' 00:05:25.333 Process with pid 3292807 is not found 00:05:25.333 07:03:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:25.333 00:05:25.333 real 0m16.874s 00:05:25.333 user 0m28.981s 00:05:25.333 sys 0m5.209s 00:05:25.333 07:03:47 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:25.333 07:03:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.333 ************************************ 00:05:25.333 END TEST cpu_locks 00:05:25.333 ************************************ 00:05:25.333 00:05:25.333 real 0m42.790s 00:05:25.333 user 1m23.833s 00:05:25.333 sys 0m8.609s 00:05:25.333 07:03:47 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:25.333 07:03:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.333 ************************************ 00:05:25.333 END TEST event 00:05:25.333 ************************************ 00:05:25.333 07:03:47 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:25.333 07:03:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:25.333 07:03:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:25.333 07:03:47 -- common/autotest_common.sh@10 -- # set +x 00:05:25.333 ************************************ 00:05:25.333 START TEST thread 00:05:25.333 ************************************ 00:05:25.333 07:03:47 thread -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:25.595 * Looking for test storage... 00:05:25.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:25.595 07:03:47 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:25.595 07:03:47 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:25.595 07:03:47 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:25.595 07:03:47 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:25.595 07:03:47 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.595 07:03:47 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.595 07:03:47 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.595 07:03:47 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.595 07:03:47 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.595 07:03:47 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.595 07:03:47 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.595 07:03:47 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.595 07:03:47 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.595 07:03:47 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.595 07:03:47 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.595 07:03:47 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:25.595 07:03:47 thread -- scripts/common.sh@345 -- # : 1 00:05:25.595 07:03:47 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.595 07:03:47 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.595 07:03:47 thread -- scripts/common.sh@365 -- # decimal 1 00:05:25.595 07:03:47 thread -- scripts/common.sh@353 -- # local d=1 00:05:25.595 07:03:47 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.595 07:03:47 thread -- scripts/common.sh@355 -- # echo 1 00:05:25.595 07:03:47 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.595 07:03:47 thread -- scripts/common.sh@366 -- # decimal 2 00:05:25.595 07:03:47 thread -- scripts/common.sh@353 -- # local d=2 00:05:25.595 07:03:47 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.595 07:03:47 thread -- scripts/common.sh@355 -- # echo 2 00:05:25.595 07:03:47 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.595 07:03:47 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.595 07:03:47 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.595 07:03:47 thread -- scripts/common.sh@368 -- # return 0 00:05:25.595 07:03:47 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.595 07:03:47 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:25.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.595 --rc genhtml_branch_coverage=1 00:05:25.595 --rc genhtml_function_coverage=1 00:05:25.595 --rc genhtml_legend=1 00:05:25.595 --rc geninfo_all_blocks=1 00:05:25.595 --rc geninfo_unexecuted_blocks=1 00:05:25.595 00:05:25.595 ' 00:05:25.595 07:03:47 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:25.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.595 --rc genhtml_branch_coverage=1 00:05:25.595 --rc genhtml_function_coverage=1 00:05:25.595 --rc genhtml_legend=1 00:05:25.595 --rc geninfo_all_blocks=1 00:05:25.595 --rc geninfo_unexecuted_blocks=1 00:05:25.595 
00:05:25.595 ' 00:05:25.595 07:03:47 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:25.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.595 --rc genhtml_branch_coverage=1 00:05:25.595 --rc genhtml_function_coverage=1 00:05:25.595 --rc genhtml_legend=1 00:05:25.595 --rc geninfo_all_blocks=1 00:05:25.595 --rc geninfo_unexecuted_blocks=1 00:05:25.595 00:05:25.595 ' 00:05:25.595 07:03:47 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:25.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.595 --rc genhtml_branch_coverage=1 00:05:25.595 --rc genhtml_function_coverage=1 00:05:25.595 --rc genhtml_legend=1 00:05:25.595 --rc geninfo_all_blocks=1 00:05:25.595 --rc geninfo_unexecuted_blocks=1 00:05:25.595 00:05:25.595 ' 00:05:25.595 07:03:47 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:25.595 07:03:47 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:25.595 07:03:47 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:25.595 07:03:47 thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.595 ************************************ 00:05:25.595 START TEST thread_poller_perf 00:05:25.595 ************************************ 00:05:25.595 07:03:47 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:25.855 [2024-11-20 07:03:47.871179] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:05:25.855 [2024-11-20 07:03:47.871294] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293331 ] 00:05:25.855 [2024-11-20 07:03:47.959259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.855 [2024-11-20 07:03:47.999996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.855 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:26.795 [2024-11-20T06:03:49.073Z] ====================================== 00:05:26.795 [2024-11-20T06:03:49.073Z] busy:2405506714 (cyc) 00:05:26.795 [2024-11-20T06:03:49.073Z] total_run_count: 417000 00:05:26.795 [2024-11-20T06:03:49.073Z] tsc_hz: 2400000000 (cyc) 00:05:26.795 [2024-11-20T06:03:49.073Z] ====================================== 00:05:26.795 [2024-11-20T06:03:49.073Z] poller_cost: 5768 (cyc), 2403 (nsec) 00:05:26.795 00:05:26.795 real 0m1.184s 00:05:26.795 user 0m1.096s 00:05:26.795 sys 0m0.084s 00:05:26.795 07:03:49 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:26.795 07:03:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:26.795 ************************************ 00:05:26.795 END TEST thread_poller_perf 00:05:26.795 ************************************ 00:05:27.056 07:03:49 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:27.056 07:03:49 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:27.056 07:03:49 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:27.056 07:03:49 thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.056 ************************************ 00:05:27.056 START TEST thread_poller_perf 00:05:27.056 ************************************ 00:05:27.056 07:03:49 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:27.056 [2024-11-20 07:03:49.134598] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:05:27.056 [2024-11-20 07:03:49.134704] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293630 ] 00:05:27.056 [2024-11-20 07:03:49.219757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.056 [2024-11-20 07:03:49.252220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.056 Running 1000 pollers for 1 seconds with 0 microseconds period. 
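The poller_cost figures these runs print are just busy cycles divided by run count, converted to nanoseconds with the reported TSC frequency; the first summary above reproduces exactly with integer arithmetic (how poller_perf itself rounds internally is not verified here):
  # Quick sanity check against the first run's table above.
  busy=2405506714 runs=417000 tsc_hz=2400000000   # values from the summary
  cyc=$(( busy / runs ))                          # 5768 cycles per poller call
  nsec=$(( cyc * 1000000000 / tsc_hz ))           # 5768 cyc / 2.4 GHz = 2403 ns
  echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"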
00:05:28.443 [2024-11-20T06:03:50.721Z] ====================================== 00:05:28.443 [2024-11-20T06:03:50.721Z] busy:2401662046 (cyc) 00:05:28.443 [2024-11-20T06:03:50.721Z] total_run_count: 5565000 00:05:28.443 [2024-11-20T06:03:50.721Z] tsc_hz: 2400000000 (cyc) 00:05:28.443 [2024-11-20T06:03:50.721Z] ====================================== 00:05:28.443 [2024-11-20T06:03:50.721Z] poller_cost: 431 (cyc), 179 (nsec) 00:05:28.443 00:05:28.443 real 0m1.167s 00:05:28.443 user 0m1.088s 00:05:28.443 sys 0m0.076s 00:05:28.443 07:03:50 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:28.443 07:03:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:28.443 ************************************ 00:05:28.443 END TEST thread_poller_perf 00:05:28.443 ************************************ 00:05:28.443 07:03:50 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:28.443 00:05:28.443 real 0m2.713s 00:05:28.443 user 0m2.359s 00:05:28.443 sys 0m0.368s 00:05:28.443 07:03:50 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:28.443 07:03:50 thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.443 ************************************ 00:05:28.443 END TEST thread 00:05:28.443 ************************************ 00:05:28.443 07:03:50 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:28.443 07:03:50 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:28.443 07:03:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:28.443 07:03:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:28.443 07:03:50 -- common/autotest_common.sh@10 -- # set +x 00:05:28.443 ************************************ 00:05:28.443 START TEST app_cmdline 00:05:28.443 ************************************ 00:05:28.443 07:03:50 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:28.443 * Looking for test storage... 
00:05:28.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:28.443 07:03:50 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:28.443 07:03:50 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:28.443 07:03:50 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:28.443 07:03:50 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.443 07:03:50 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:28.443 07:03:50 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.443 07:03:50 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:28.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.443 --rc genhtml_branch_coverage=1 00:05:28.443 --rc genhtml_function_coverage=1 00:05:28.444 --rc genhtml_legend=1 00:05:28.444 --rc geninfo_all_blocks=1 00:05:28.444 --rc geninfo_unexecuted_blocks=1 00:05:28.444 00:05:28.444 ' 00:05:28.444 07:03:50 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:28.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.444 --rc genhtml_branch_coverage=1 00:05:28.444 --rc genhtml_function_coverage=1 00:05:28.444 --rc genhtml_legend=1 00:05:28.444 --rc geninfo_all_blocks=1 00:05:28.444 --rc geninfo_unexecuted_blocks=1 
00:05:28.444 00:05:28.444 ' 00:05:28.444 07:03:50 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:28.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.444 --rc genhtml_branch_coverage=1 00:05:28.444 --rc genhtml_function_coverage=1 00:05:28.444 --rc genhtml_legend=1 00:05:28.444 --rc geninfo_all_blocks=1 00:05:28.444 --rc geninfo_unexecuted_blocks=1 00:05:28.444 00:05:28.444 ' 00:05:28.444 07:03:50 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:28.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.444 --rc genhtml_branch_coverage=1 00:05:28.444 --rc genhtml_function_coverage=1 00:05:28.444 --rc genhtml_legend=1 00:05:28.444 --rc geninfo_all_blocks=1 00:05:28.444 --rc geninfo_unexecuted_blocks=1 00:05:28.444 00:05:28.444 ' 00:05:28.444 07:03:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:28.444 07:03:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3293918 00:05:28.444 07:03:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3293918 00:05:28.444 07:03:50 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 3293918 ']' 00:05:28.444 07:03:50 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:28.444 07:03:50 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.444 07:03:50 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:28.444 07:03:50 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.444 07:03:50 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:28.444 07:03:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:28.444 [2024-11-20 07:03:50.661149] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:05:28.444 [2024-11-20 07:03:50.661232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293918 ] 00:05:28.704 [2024-11-20 07:03:50.751470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.704 [2024-11-20 07:03:50.792467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.273 07:03:51 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:29.273 07:03:51 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:05:29.273 07:03:51 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:29.535 { 00:05:29.535 "version": "SPDK v25.01-pre git sha1 9b64b1304", 00:05:29.535 "fields": { 00:05:29.535 "major": 25, 00:05:29.535 "minor": 1, 00:05:29.535 "patch": 0, 00:05:29.535 "suffix": "-pre", 00:05:29.535 "commit": "9b64b1304" 00:05:29.535 } 00:05:29.535 } 00:05:29.535 07:03:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:29.535 07:03:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:29.535 07:03:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:29.535 07:03:51 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:29.535 07:03:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:29.535 07:03:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:29.535 07:03:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:29.535 07:03:51 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.535 07:03:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:29.535 07:03:51 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.535 07:03:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:29.535 07:03:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:29.535 07:03:51 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:29.535 07:03:51 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:29.535 07:03:51 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:29.535 07:03:51 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:29.535 07:03:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.535 07:03:51 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:29.535 07:03:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.535 07:03:51 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:29.535 07:03:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.535 07:03:51 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:29.535 07:03:51 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:29.535 07:03:51 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:29.796 request: 00:05:29.796 { 00:05:29.796 "method": "env_dpdk_get_mem_stats", 00:05:29.796 "req_id": 1 00:05:29.796 } 00:05:29.796 Got JSON-RPC error response 00:05:29.796 response: 00:05:29.796 { 00:05:29.796 "code": -32601, 00:05:29.796 "message": "Method not found" 00:05:29.796 } 00:05:29.796 07:03:51 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:29.796 07:03:51 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:29.796 07:03:51 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:29.796 07:03:51 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:29.796 07:03:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3293918 00:05:29.796 07:03:51 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 3293918 ']' 00:05:29.796 07:03:51 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 3293918 00:05:29.796 07:03:51 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:05:29.796 07:03:51 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:29.796 07:03:51 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3293918 00:05:29.796 07:03:51 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:29.796 07:03:51 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:29.796 07:03:51 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3293918' 00:05:29.796 killing process with pid 3293918 00:05:29.796 07:03:51 app_cmdline -- common/autotest_common.sh@971 -- # kill 3293918 00:05:29.796 07:03:51 app_cmdline -- common/autotest_common.sh@976 -- # wait 3293918 00:05:30.057 00:05:30.057 real 0m1.732s 00:05:30.057 user 0m2.084s 00:05:30.057 sys 0m0.469s 00:05:30.057 07:03:52 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:30.057 07:03:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:30.057 ************************************ 00:05:30.057 END TEST app_cmdline 00:05:30.057 ************************************ 00:05:30.057 07:03:52 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:30.057 07:03:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:30.057 07:03:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:30.057 07:03:52 -- common/autotest_common.sh@10 -- # set +x 00:05:30.057 ************************************ 00:05:30.057 START TEST version 00:05:30.057 ************************************ 00:05:30.057 07:03:52 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:30.057 * Looking for test storage... 
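The -32601 above is the expected effect of the allow-list: this spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods (see the invocation earlier in this test), so any other method is rejected before dispatch. A sketch of the contrast, using only methods already shown in this log:
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc spdk_get_version         # allowed: returns the version JSON shown above
  $rpc rpc_get_methods          # allowed: lists exactly the permitted methods
  $rpc env_dpdk_get_mem_stats   # filtered: JSON-RPC error -32601 "Method not found"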
00:05:30.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:30.057 07:03:52 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:30.057 07:03:52 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:30.057 07:03:52 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:30.319 07:03:52 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:30.319 07:03:52 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.319 07:03:52 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.319 07:03:52 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.319 07:03:52 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.319 07:03:52 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.319 07:03:52 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.319 07:03:52 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.319 07:03:52 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.319 07:03:52 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.319 07:03:52 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.319 07:03:52 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.319 07:03:52 version -- scripts/common.sh@344 -- # case "$op" in 00:05:30.319 07:03:52 version -- scripts/common.sh@345 -- # : 1 00:05:30.319 07:03:52 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.319 07:03:52 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:30.319 07:03:52 version -- scripts/common.sh@365 -- # decimal 1 00:05:30.319 07:03:52 version -- scripts/common.sh@353 -- # local d=1 00:05:30.319 07:03:52 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.319 07:03:52 version -- scripts/common.sh@355 -- # echo 1 00:05:30.319 07:03:52 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.319 07:03:52 version -- scripts/common.sh@366 -- # decimal 2 00:05:30.319 07:03:52 version -- scripts/common.sh@353 -- # local d=2 00:05:30.319 07:03:52 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.319 07:03:52 version -- scripts/common.sh@355 -- # echo 2 00:05:30.319 07:03:52 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.319 07:03:52 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.319 07:03:52 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.319 07:03:52 version -- scripts/common.sh@368 -- # return 0 00:05:30.319 07:03:52 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.319 07:03:52 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:30.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.319 --rc genhtml_branch_coverage=1 00:05:30.319 --rc genhtml_function_coverage=1 00:05:30.319 --rc genhtml_legend=1 00:05:30.319 --rc geninfo_all_blocks=1 00:05:30.319 --rc geninfo_unexecuted_blocks=1 00:05:30.319 00:05:30.319 ' 00:05:30.319 07:03:52 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:30.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.319 --rc genhtml_branch_coverage=1 00:05:30.319 --rc genhtml_function_coverage=1 00:05:30.319 --rc genhtml_legend=1 00:05:30.319 --rc geninfo_all_blocks=1 00:05:30.319 --rc geninfo_unexecuted_blocks=1 00:05:30.319 00:05:30.319 ' 00:05:30.319 07:03:52 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:30.319 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.319 --rc genhtml_branch_coverage=1 00:05:30.319 --rc genhtml_function_coverage=1 00:05:30.319 --rc genhtml_legend=1 00:05:30.319 --rc geninfo_all_blocks=1 00:05:30.319 --rc geninfo_unexecuted_blocks=1 00:05:30.319 00:05:30.319 ' 00:05:30.319 07:03:52 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:30.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.319 --rc genhtml_branch_coverage=1 00:05:30.319 --rc genhtml_function_coverage=1 00:05:30.319 --rc genhtml_legend=1 00:05:30.319 --rc geninfo_all_blocks=1 00:05:30.319 --rc geninfo_unexecuted_blocks=1 00:05:30.319 00:05:30.319 ' 00:05:30.319 07:03:52 version -- app/version.sh@17 -- # get_header_version major 00:05:30.319 07:03:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:30.319 07:03:52 version -- app/version.sh@14 -- # cut -f2 00:05:30.319 07:03:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:30.319 07:03:52 version -- app/version.sh@17 -- # major=25 00:05:30.319 07:03:52 version -- app/version.sh@18 -- # get_header_version minor 00:05:30.319 07:03:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:30.319 07:03:52 version -- app/version.sh@14 -- # cut -f2 00:05:30.319 07:03:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:30.319 07:03:52 version -- app/version.sh@18 -- # minor=1 00:05:30.319 07:03:52 version -- app/version.sh@19 -- # get_header_version patch 00:05:30.319 07:03:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:30.319 07:03:52 version -- app/version.sh@14 -- # cut -f2 00:05:30.319 07:03:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:30.319 07:03:52 version -- app/version.sh@19 -- # patch=0 00:05:30.319 07:03:52 version -- app/version.sh@20 -- # get_header_version suffix 00:05:30.319 07:03:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:30.319 07:03:52 version -- app/version.sh@14 -- # cut -f2 00:05:30.319 07:03:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:30.319 07:03:52 version -- app/version.sh@20 -- # suffix=-pre 00:05:30.319 07:03:52 version -- app/version.sh@22 -- # version=25.1 00:05:30.319 07:03:52 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:30.319 07:03:52 version -- app/version.sh@28 -- # version=25.1rc0 00:05:30.319 07:03:52 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:30.319 07:03:52 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:30.319 07:03:52 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:30.319 07:03:52 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:30.319 00:05:30.319 real 0m0.287s 00:05:30.319 user 0m0.170s 00:05:30.319 sys 0m0.168s 00:05:30.319 07:03:52 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:30.319 
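Each get_header_version call traced above is the same three-stage grep/cut/tr pipeline over include/spdk/version.h. A standalone sketch follows; the tab-separated "#define NAME<TAB>value" layout that makes cut -f2 work is an assumption inferred from the trace, not confirmed by this log:
  hdr=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
  # '-f2' relies on a tab between '#define SPDK_VERSION_<NAME>' and its value;
  # tr strips the quotes around string-valued defines such as the suffix.
  get_field() {
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
  }
  echo "major=$(get_field MAJOR) minor=$(get_field MINOR)" \
       "patch=$(get_field PATCH) suffix=$(get_field SUFFIX)"
  # The trace above assembles these into 25.1 (patch is 0, so it is dropped)
  # and, with suffix -pre, reports the final version as 25.1rc0.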
07:03:52 version -- common/autotest_common.sh@10 -- # set +x 00:05:30.319 ************************************ 00:05:30.319 END TEST version 00:05:30.319 ************************************ 00:05:30.319 07:03:52 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:30.319 07:03:52 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:30.319 07:03:52 -- spdk/autotest.sh@194 -- # uname -s 00:05:30.319 07:03:52 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:30.319 07:03:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:30.319 07:03:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:30.319 07:03:52 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:30.319 07:03:52 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:30.319 07:03:52 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:30.319 07:03:52 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:30.319 07:03:52 -- common/autotest_common.sh@10 -- # set +x 00:05:30.319 07:03:52 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:30.319 07:03:52 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:30.319 07:03:52 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:30.319 07:03:52 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:30.319 07:03:52 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:30.319 07:03:52 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:30.319 07:03:52 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:30.319 07:03:52 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:30.319 07:03:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:30.319 07:03:52 -- common/autotest_common.sh@10 -- # set +x 00:05:30.580 ************************************ 00:05:30.580 START TEST nvmf_tcp 00:05:30.580 ************************************ 00:05:30.581 07:03:52 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:30.581 * Looking for test storage... 
00:05:30.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:30.581 07:03:52 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:30.581 07:03:52 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:30.581 07:03:52 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:30.581 07:03:52 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.581 07:03:52 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:30.581 07:03:52 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.581 07:03:52 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:30.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.581 --rc genhtml_branch_coverage=1 00:05:30.581 --rc genhtml_function_coverage=1 00:05:30.581 --rc genhtml_legend=1 00:05:30.581 --rc geninfo_all_blocks=1 00:05:30.581 --rc geninfo_unexecuted_blocks=1 00:05:30.581 00:05:30.581 ' 00:05:30.581 07:03:52 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:30.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.581 --rc genhtml_branch_coverage=1 00:05:30.581 --rc genhtml_function_coverage=1 00:05:30.581 --rc genhtml_legend=1 00:05:30.581 --rc geninfo_all_blocks=1 00:05:30.581 --rc geninfo_unexecuted_blocks=1 00:05:30.581 00:05:30.581 ' 00:05:30.581 07:03:52 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:30.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.581 --rc genhtml_branch_coverage=1 00:05:30.581 --rc genhtml_function_coverage=1 00:05:30.581 --rc genhtml_legend=1 00:05:30.581 --rc geninfo_all_blocks=1 00:05:30.581 --rc geninfo_unexecuted_blocks=1 00:05:30.581 00:05:30.581 ' 00:05:30.581 07:03:52 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:30.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.581 --rc genhtml_branch_coverage=1 00:05:30.581 --rc genhtml_function_coverage=1 00:05:30.581 --rc genhtml_legend=1 00:05:30.581 --rc geninfo_all_blocks=1 00:05:30.581 --rc geninfo_unexecuted_blocks=1 00:05:30.581 00:05:30.581 ' 00:05:30.581 07:03:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:30.581 07:03:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:30.581 07:03:52 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:30.581 07:03:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:30.581 07:03:52 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:30.581 07:03:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.843 ************************************ 00:05:30.843 START TEST nvmf_target_core 00:05:30.843 ************************************ 00:05:30.844 07:03:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:30.844 * Looking for test storage... 00:05:30.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:30.844 07:03:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:30.844 07:03:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:05:30.844 07:03:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:30.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.844 --rc genhtml_branch_coverage=1 00:05:30.844 --rc genhtml_function_coverage=1 00:05:30.844 --rc genhtml_legend=1 00:05:30.844 --rc geninfo_all_blocks=1 00:05:30.844 --rc geninfo_unexecuted_blocks=1 00:05:30.844 00:05:30.844 ' 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:30.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.844 --rc genhtml_branch_coverage=1 00:05:30.844 --rc genhtml_function_coverage=1 00:05:30.844 --rc genhtml_legend=1 00:05:30.844 --rc geninfo_all_blocks=1 00:05:30.844 --rc geninfo_unexecuted_blocks=1 00:05:30.844 00:05:30.844 ' 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:30.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.844 --rc genhtml_branch_coverage=1 00:05:30.844 --rc genhtml_function_coverage=1 00:05:30.844 --rc genhtml_legend=1 00:05:30.844 --rc geninfo_all_blocks=1 00:05:30.844 --rc geninfo_unexecuted_blocks=1 00:05:30.844 00:05:30.844 ' 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:30.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.844 --rc genhtml_branch_coverage=1 00:05:30.844 --rc genhtml_function_coverage=1 00:05:30.844 --rc genhtml_legend=1 00:05:30.844 --rc geninfo_all_blocks=1 00:05:30.844 --rc geninfo_unexecuted_blocks=1 00:05:30.844 00:05:30.844 ' 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:30.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:30.844 07:03:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:31.106 
************************************ 00:05:31.106 START TEST nvmf_abort 00:05:31.106 ************************************ 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:31.107 * Looking for test storage... 00:05:31.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:31.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.107 --rc genhtml_branch_coverage=1 00:05:31.107 --rc genhtml_function_coverage=1 00:05:31.107 --rc genhtml_legend=1 00:05:31.107 --rc geninfo_all_blocks=1 00:05:31.107 --rc geninfo_unexecuted_blocks=1 00:05:31.107 00:05:31.107 ' 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:31.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.107 --rc genhtml_branch_coverage=1 00:05:31.107 --rc genhtml_function_coverage=1 00:05:31.107 --rc genhtml_legend=1 00:05:31.107 --rc geninfo_all_blocks=1 00:05:31.107 --rc geninfo_unexecuted_blocks=1 00:05:31.107 00:05:31.107 ' 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:31.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.107 --rc genhtml_branch_coverage=1 00:05:31.107 --rc genhtml_function_coverage=1 00:05:31.107 --rc genhtml_legend=1 00:05:31.107 --rc geninfo_all_blocks=1 00:05:31.107 --rc geninfo_unexecuted_blocks=1 00:05:31.107 00:05:31.107 ' 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:31.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.107 --rc genhtml_branch_coverage=1 00:05:31.107 --rc genhtml_function_coverage=1 00:05:31.107 --rc genhtml_legend=1 00:05:31.107 --rc geninfo_all_blocks=1 00:05:31.107 --rc geninfo_unexecuted_blocks=1 00:05:31.107 00:05:31.107 ' 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:31.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:31.107 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:31.108 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:31.108 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:31.108 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
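The cmp_versions walk traced before each test script above is a pure-bash version check: split both version strings on '.', '-' and ':', then compare field by field. A condensed sketch of the same lt/cmp_versions logic (a sketch, assuming plain numeric dotted versions, which is all the trace exercises):

  # Returns success when version $1 sorts strictly below $2.
  # Missing fields compare as 0, as in scripts/common.sh.
  lt() {
      local -a ver1 ver2
      local v len
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1  # equal is not "less than"
  }

  # lt 1.15 2 succeeds here, so the lcov 1.x option spellings get exported:
  lt "$(lcov --version | awk '{print $NF}')" 2 &&
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

With lcov 1.15 < 2, every script lands in the branch that exports the --rc lcov_branch_coverage/--rc lcov_function_coverage spellings seen in the LCOV_OPTS and LCOV blocks above; lcov 2.x renamed those options, hence the version gate.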
00:05:31.108 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:31.108 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:31.108 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:31.108 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:31.108 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:31.108 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:31.108 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:31.108 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:31.369 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:31.369 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:31.369 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:31.369 07:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:39.508 07:04:00 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:39.508 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:39.508 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:39.508 07:04:00 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:39.508 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:39.508 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:39.508 07:04:00 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:39.508 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:39.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:39.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:05:39.509 00:05:39.509 --- 10.0.0.2 ping statistics --- 00:05:39.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:39.509 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:39.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:39.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:05:39.509 00:05:39.509 --- 10.0.0.1 ping statistics --- 00:05:39.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:39.509 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3298285 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3298285 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 3298285 ']' 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:39.509 07:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.509 [2024-11-20 07:04:00.924449] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:05:39.509 [2024-11-20 07:04:00.924516] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:39.509 [2024-11-20 07:04:01.025360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:39.509 [2024-11-20 07:04:01.079652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:39.509 [2024-11-20 07:04:01.079703] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:39.509 [2024-11-20 07:04:01.079711] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:39.509 [2024-11-20 07:04:01.079719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:39.509 [2024-11-20 07:04:01.079725] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:39.509 [2024-11-20 07:04:01.081766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.509 [2024-11-20 07:04:01.081930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.509 [2024-11-20 07:04:01.081931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.509 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:39.509 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:05:39.509 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:39.509 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:39.509 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.770 [2024-11-20 07:04:01.801517] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.770 Malloc0 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.770 Delay0 
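Behind the nvmf_tcp_init trace above, the two E810 ports found at 0000:4b:00.0/1 (cvl_0_0, cvl_0_1) are wired into a self-contained loopback topology: the target port is moved into its own network namespace so initiator and target can speak NVMe/TCP over real hardware on one box. Condensed from the ip/iptables commands in the trace (interface names and the 10.0.0.x addresses are simply what this run assigned):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                 # target gets a private netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                           # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and back

The two sub-millisecond pings above confirm the path before the target is launched inside the namespace with nvmf_tgt -m 0xE; mask 0xE is binary 1110, pinning reactors to cores 1-3, which matches the three "Reactor started" notices.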
00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.770 [2024-11-20 07:04:01.884593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.770 07:04:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:39.770 [2024-11-20 07:04:01.993102] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:42.315 Initializing NVMe Controllers 00:05:42.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:42.315 controller IO queue size 128 less than required 00:05:42.315 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:42.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:42.315 Initialization complete. Launching workers. 
00:05:42.315 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28395 00:05:42.315 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28456, failed to submit 62 00:05:42.315 success 28399, unsuccessful 57, failed 0 00:05:42.315 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:42.315 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.315 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:42.315 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.315 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:42.315 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:42.316 rmmod nvme_tcp 00:05:42.316 rmmod nvme_fabrics 00:05:42.316 rmmod nvme_keyring 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3298285 ']' 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3298285 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 3298285 ']' 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 3298285 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3298285 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3298285' 00:05:42.316 killing process with pid 3298285 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 3298285 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 3298285 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:42.316 07:04:04 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:42.316 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:44.229 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:44.230 00:05:44.230 real 0m13.239s 00:05:44.230 user 0m13.599s 00:05:44.230 sys 0m6.556s 00:05:44.230 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:44.230 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:44.230 ************************************ 00:05:44.230 END TEST nvmf_abort 00:05:44.230 ************************************ 00:05:44.230 07:04:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:44.230 07:04:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:44.230 07:04:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:44.230 07:04:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:44.230 ************************************ 00:05:44.230 START TEST nvmf_ns_hotplug_stress 00:05:44.230 ************************************ 00:05:44.230 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:44.554 * Looking for test storage... 
00:05:44.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:44.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.554 --rc genhtml_branch_coverage=1 00:05:44.554 --rc genhtml_function_coverage=1 00:05:44.554 --rc genhtml_legend=1 00:05:44.554 --rc geninfo_all_blocks=1 00:05:44.554 --rc geninfo_unexecuted_blocks=1 00:05:44.554 00:05:44.554 ' 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:44.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.554 --rc genhtml_branch_coverage=1 00:05:44.554 --rc genhtml_function_coverage=1 00:05:44.554 --rc genhtml_legend=1 00:05:44.554 --rc geninfo_all_blocks=1 00:05:44.554 --rc geninfo_unexecuted_blocks=1 00:05:44.554 00:05:44.554 ' 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:44.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.554 --rc genhtml_branch_coverage=1 00:05:44.554 --rc genhtml_function_coverage=1 00:05:44.554 --rc genhtml_legend=1 00:05:44.554 --rc geninfo_all_blocks=1 00:05:44.554 --rc geninfo_unexecuted_blocks=1 00:05:44.554 00:05:44.554 ' 00:05:44.554 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:44.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.554 --rc genhtml_branch_coverage=1 00:05:44.554 --rc genhtml_function_coverage=1 00:05:44.554 --rc genhtml_legend=1 00:05:44.555 --rc geninfo_all_blocks=1 00:05:44.555 --rc geninfo_unexecuted_blocks=1 00:05:44.555 00:05:44.555 ' 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:44.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:44.555 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:52.695 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:52.695 
07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:52.695 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:52.695 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:52.695 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:52.696 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:52.696 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:52.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:52.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:05:52.696 00:05:52.696 --- 10.0.0.2 ping statistics --- 00:05:52.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:52.696 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:52.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:52.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:05:52.696 00:05:52.696 --- 10.0.0.1 ping statistics --- 00:05:52.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:52.696 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3303295 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3303295 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 
3303295 ']' 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:52.696 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:52.696 [2024-11-20 07:04:14.255218] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:05:52.696 [2024-11-20 07:04:14.255278] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:52.696 [2024-11-20 07:04:14.356603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.696 [2024-11-20 07:04:14.406911] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:52.696 [2024-11-20 07:04:14.406962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:52.696 [2024-11-20 07:04:14.406970] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:52.696 [2024-11-20 07:04:14.406977] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:52.696 [2024-11-20 07:04:14.406983] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
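[Annotation] The trace above shows nvmfappstart launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocking in waitforlisten until the RPC socket answers. A minimal sketch of that start-and-poll pattern, assuming the SPDK repo layout, root privileges, and the default RPC socket /var/tmp/spdk.sock (the exact waitforlisten implementation may differ):

    # Sketch only: approximates the start/wait sequence seen in the trace.
    # Run as root so the netns and hugepage setup succeed.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Poll the RPC socket; rpc_get_methods succeeds once the target listens.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done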
00:05:52.696 [2024-11-20 07:04:14.408801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.696 [2024-11-20 07:04:14.408952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.696 [2024-11-20 07:04:14.408952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.958 07:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:52.958 07:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:05:52.958 07:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:52.958 07:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:52.958 07:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:52.958 07:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:52.958 07:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:52.958 07:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:53.219 [2024-11-20 07:04:15.296488] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:53.219 07:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:53.480 07:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:53.480 [2024-11-20 07:04:15.695517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:53.480 07:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:53.741 07:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:54.002 Malloc0 00:05:54.002 07:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:54.263 Delay0 00:05:54.263 07:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.263 07:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:54.523 NULL1 00:05:54.523 07:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:54.784 07:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3303853 00:05:54.784 07:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:05:54.784 07:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:54.784 07:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.784 Read completed with error (sct=0, sc=11) 00:05:54.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.044 07:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.044 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.044 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.044 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.044 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.044 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.044 07:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:55.044 07:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:55.305 true 00:05:55.305 07:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:05:55.305 07:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.245 07:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.245 07:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:56.245 07:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:56.505 true 00:05:56.505 07:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:05:56.505 07:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.766 07:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
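[Annotation] The lines above configure the target and start the I/O load: a TCP transport, subsystem cnode1 listening on 10.0.0.2:4420, a Malloc0-backed Delay0 bdev plus a 1000 MiB NULL1 bdev, and spdk_nvme_perf pinned to one core. The -Q 1000 flag in the logged perf command appears to rate-limit error printing, which is why each flood of read failures collapses into a single "Message suppressed 999 times" line once the namespace starts flapping. A condensed sketch of the same RPC sequence (the $rpc shorthand is introduced here for brevity; all flags mirror the logged commands):

    # Shorthand for the configuration RPCs captured in the trace above.
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0        # 32 MiB backing bdev, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc bdev_null_create NULL1 1000 512             # 1000 MiB null bdev
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    ./build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!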
00:05:56.766 07:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:56.766 07:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:57.026 true 00:05:57.026 07:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:05:57.026 07:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.409 07:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.409 07:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:58.409 07:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:58.409 true 00:05:58.669 07:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:05:58.669 07:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.240 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.500 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.500 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.500 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:59.500 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:59.761 true 00:05:59.761 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:05:59.761 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.022 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
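[Annotation] Every iteration above follows the same pattern: check that perf is still alive, remove namespace 1, re-add Delay0, bump null_size, and resize NULL1. A sketch of that loop, reusing the $rpc and $PERF_PID shorthand from the previous sketch (the log only shows successive iterations, so the exact loop structure is a reconstruction):

    # Hypothetical reconstruction of the hotplug stress loop driving the trace.
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"   # prints "true" on success
    done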
00:06:00.022 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:00.022 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:00.282 true 00:06:00.282 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:00.282 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.544 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.544 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:00.544 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:00.805 true 00:06:00.805 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:00.805 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.066 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.066 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:01.066 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:01.327 true 00:06:01.327 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:01.327 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.712 07:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.713 07:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:02.713 07:04:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:02.973 true 00:06:02.973 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:02.973 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.915 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.915 07:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:03.915 07:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:03.915 true 00:06:04.176 07:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:04.176 07:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.176 07:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.436 07:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:04.436 07:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:04.436 true 00:06:04.697 07:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:04.697 07:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.641 07:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.901 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:05.901 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1012 00:06:06.162 true 00:06:06.162 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:06.162 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.133 07:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.133 07:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:07.133 07:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:07.394 true 00:06:07.394 07:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:07.394 07:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.394 07:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.653 07:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:07.653 07:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:07.913 true 00:06:07.914 07:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:07.914 07:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.854 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.854 07:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.854 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.854 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.113 07:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:09.113 07:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:09.373 true 00:06:09.373 07:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:09.373 
07:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.313 07:04:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.313 07:04:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:10.313 07:04:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:10.573 true 00:06:10.573 07:04:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:10.573 07:04:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.573 07:04:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.835 07:04:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:10.835 07:04:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:11.095 true 00:06:11.095 07:04:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:11.095 07:04:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.357 07:04:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.357 07:04:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:11.357 07:04:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:11.617 true 00:06:11.617 07:04:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:11.617 07:04:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.581 07:04:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.581 07:04:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:12.581 07:04:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:12.870 true 00:06:12.870 07:04:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:12.870 07:04:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.870 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.168 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:13.168 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:13.168 true 00:06:13.428 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:13.428 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.428 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.688 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:13.688 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:13.950 true 00:06:13.950 07:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:13.950 07:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.950 07:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.210 07:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:14.210 07:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:14.478 true 00:06:14.478 07:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 
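[Annotation] The recurring "Read completed with error (sct=0, sc=11)" lines are the expected symptom of the hot-remove: status code type 0 is the NVMe generic command status set, and status code 0x0B there is Invalid Namespace or Format, i.e. the read landed while the namespace was momentarily detached. A tiny decoder sketch covering just the pair this trace produces (the helper name is hypothetical; the full tables live in the NVMe base specification):

    # Sketch: decode the (sct, sc) pair printed by perf. Only the one
    # status seen in this trace is mapped; anything else falls through.
    decode_nvme_status() {
        local sct=$1 sc=$2
        if [[ $sct -eq 0 && $sc -eq 11 ]]; then
            echo "Generic status 0x0B: Invalid Namespace or Format"
        else
            echo "sct=$sct sc=$sc (see NVMe base spec status code tables)"
        fi
    }
    decode_nvme_status 0 11   # -> Generic status 0x0B: Invalid Namespace or Format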
00:06:14.478 07:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.860 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.860 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:15.860 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:15.860 true 00:06:15.860 07:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:15.860 07:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.801 07:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.061 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:17.061 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:17.061 true 00:06:17.061 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:17.061 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.321 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.582 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:17.582 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:17.582 true 00:06:17.843 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 
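[Annotation] Each resize is visible through the usual introspection RPCs, which offers a quick spot-check that NULL1 actually picked up the latest null_size (bdev names per the logged setup; the use of jq here is an assumption, not something the test does):

    # Sketch: num_blocks should equal null_size MiB divided by the 512 B
    # block size configured at bdev_null_create time.
    $rpc bdev_get_bdevs -b NULL1 | jq '.[0].num_blocks'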
00:06:17.843 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.843 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.104 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:18.104 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:18.104 true 00:06:18.104 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:18.104 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.366 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.628 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:18.628 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:18.628 true 00:06:18.628 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:18.628 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.888 07:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.148 07:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:19.148 07:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:19.148 true 00:06:19.148 07:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:19.148 07:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.408 07:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.667 07:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:19.667 07:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:19.667 true 00:06:19.928 07:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:19.928 07:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.928 07:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.188 07:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:20.188 07:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:20.450 true 00:06:20.450 07:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:20.450 07:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.450 07:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.712 07:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:20.712 07:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:20.973 true 00:06:20.973 07:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:20.973 07:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.973 07:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.233 07:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:21.233 07:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:21.493 true 00:06:21.493 07:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:21.493 07:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.754 07:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.754 07:04:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:21.754 07:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:22.015 true 00:06:22.015 07:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:22.015 07:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.276 07:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.276 07:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:22.276 07:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:22.536 true 00:06:22.536 07:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:22.536 07:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.796 07:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.796 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:22.796 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:23.055 true 00:06:23.055 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:23.055 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.316 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.577 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:23.577 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:23.577 true 00:06:23.577 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853 00:06:23.577 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.837 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:24.099 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037
00:06:24.099 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037
00:06:24.099 true
00:06:24.099 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853
00:06:24.099 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:25.484 Initializing NVMe Controllers
00:06:25.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:25.484 Controller IO queue size 128, less than required.
00:06:25.484 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:25.484 Controller IO queue size 128, less than required.
00:06:25.484 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:25.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:25.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:25.484 Initialization complete. Launching workers.
00:06:25.484 ========================================================
00:06:25.484                                                                           Latency(us)
00:06:25.484 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:06:25.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1813.57       0.89   34991.55    1463.76 1047446.61
00:06:25.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   14695.77       7.18    8709.82    1146.01  401357.43
00:06:25.484 ========================================================
00:06:25.484 Total                                                                   :   16509.33       8.06   11596.90    1146.01 1047446.61
00:06:25.484
00:06:25.484 07:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:25.484 07:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038
00:06:25.484 07:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038
00:06:25.745 true
00:06:25.745 07:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3303853
00:06:25.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3303853) - No such process
00:06:25.745 07:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3303853
00:06:25.745 07:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:25.745 07:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:26.004 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:26.004 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:26.004 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:26.004 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:26.004 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:26.264 null0 00:06:26.264 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:26.264 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:26.264 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:26.264 null1 00:06:26.524 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:26.524 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:26.524 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:26.524 null2 00:06:26.524 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:26.524 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:26.524 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:26.783 null3 00:06:26.783 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:26.783 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:26.783 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:26.783 null4 00:06:27.044 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:27.044 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:27.044 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:27.044 null5 00:06:27.044 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:27.044 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:27.044 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:27.304 null6 00:06:27.304 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:27.304 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:27.304 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:27.564 null7 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
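A quick sanity check on the perf summary printed at 00:06:25 above: the Total row is the IOPS-weighted combination of the two namespaces. For the Average latency column (NSID 1, backed by the Delay0 bdev and repeatedly hot-removed, is far slower than NSID 2, which only sees the resizes):

    # prints 11596.9, matching the reported Total average of 11596.90 us
    awk 'BEGIN { printf "%.1f\n", (1813.57*34991.55 + 14695.77*8709.82) / (1813.57 + 14695.77) }'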
00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:27.564 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
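With PID 3303853 gone, the test has switched to its second phase (sh@58 onward): eight 100 MB null bdevs with 4096-byte blocks were created, and one add_remove worker per bdev now runs as a background job. Reconstructed from the sh@14-@18 and sh@58-@66 markers, and reusing the $rpc shorthand from the earlier sketch (function layout and quoting are assumptions; the RPC calls, loop bounds and argument pairs are taken from the trace):

    add_remove() {                                  # sh@14-@18: churn one namespace
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; i++ )); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
    nthreads=8                                      # sh@58
    pids=()
    for (( i = 0; i < nthreads; i++ )); do
        "$rpc" bdev_null_create null$i 100 4096     # sh@60: name, size in MB, block size
    done
    for (( i = 0; i < nthreads; i++ )); do
        add_remove $((i + 1)) null$i &              # sh@63: NSID i+1 backed by null$i
        pids+=($!)                                  # sh@64
    done
    wait "${pids[@]}"                               # sh@66: 3310503 3310504 ... above

Because the eight workers are independent background jobs, their sh@16-@18 traces interleave freely from here on; the out-of-order mix of add_ns and remove_ns calls for different NSIDs is the intended concurrency, not log corruption.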
00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3310503 3310504 3310506 3310508 3310510 3310512 3310514 3310516 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.565 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:27.827 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:27.827 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:27.827 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:27.827 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:27.827 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:27.827 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.827 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.827 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:27.827 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.827 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.827 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:27.827 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.827 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.827 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:27.827 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.827 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.827 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:27.827 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.827 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.827 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:27.827 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.827 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.827 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:27.827 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.827 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.827 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:27.827 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.827 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.827 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:28.088 07:04:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:28.088 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:28.089 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.089 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:28.089 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:28.089 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:28.089 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:28.089 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:28.089 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.089 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.089 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.089 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:28.089 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.089 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:28.350 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.351 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.351 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:28.351 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.351 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.351 07:04:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:28.351 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.351 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.351 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:28.351 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.351 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.351 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:28.351 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.351 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.351 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:28.351 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.351 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.351 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:28.351 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:28.351 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:28.351 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:28.351 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:28.351 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:28.612 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.612 07:04:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:28.612 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:28.612 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.612 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.612 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:28.612 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.612 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.612 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:28.612 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.612 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.612 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:28.612 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.612 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.612 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:28.612 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.612 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.612 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:28.612 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.612 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.612 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:28.612 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:28.874 07:04:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:28.874 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:28.874 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.874 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.874 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:28.874 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.874 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.874 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:28.874 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:28.874 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:28.874 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.874 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.874 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.874 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:28.874 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.874 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.874 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:28.874 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.874 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.874 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:28.874 07:04:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:28.874 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:28.874 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.874 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.874 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:29.134 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.134 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.134 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:29.134 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:29.134 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.134 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.134 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:29.134 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:29.134 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.134 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.134 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:29.134 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:29.134 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:29.134 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:29.134 07:04:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.134 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.134 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:29.134 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.134 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.134 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:29.134 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.395 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:29.395 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.395 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.395 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:29.395 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.395 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.395 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:29.395 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.395 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.395 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:29.395 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:29.395 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.395 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.395 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:29.395 07:04:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:29.395 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.395 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.395 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:29.395 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.395 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.395 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:29.657 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:29.657 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:29.657 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.657 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.657 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:29.657 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:29.657 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.657 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.657 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:29.657 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:29.658 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.658 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:29.658 07:04:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:29.658 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.658 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.658 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:29.658 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.658 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.658 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:29.658 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.658 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.658 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:29.658 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:29.658 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.658 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.658 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:29.658 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.658 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.658 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:29.917 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.917 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.917 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:29.918 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.918 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:06:29.918 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:29.918 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:29.918 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:29.918 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:29.918 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.918 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.918 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:29.918 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:29.918 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.918 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:29.918 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:29.918 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.918 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.918 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.178 07:04:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.178 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:30.178 07:04:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:30.439 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.439 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:30.439 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.439 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.439 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:30.439 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:30.439 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.439 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.440 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:30.440 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.440 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.440 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:30.440 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.440 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.440 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:30.440 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.440 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.440 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:30.440 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:30.440 07:04:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.440 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.440 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:30.440 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:30.699 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.700 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.700 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:30.700 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:30.700 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:30.700 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.700 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.700 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:30.700 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:30.700 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.700 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:30.700 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.700 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.700 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:30.700 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:30.700 07:04:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.700 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.700 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:30.700 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.700 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.700 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:30.700 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:30.959 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.959 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.959 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.960 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.960 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:30.960 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:30.960 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.960 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.960 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:30.960 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.960 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.960 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:30.960 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:30.960 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:30.960 07:04:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.960 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.960 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:30.960 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.960 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 
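The interleaved @16/@17/@18 records above are the heart of ns_hotplug_stress.sh: a bounded loop that keeps attaching and detaching namespaces on cnode1 while I/O is in flight. A minimal sketch of that loop shape, reconstructed from the traced commands; the randomized NSID choice is an assumption, and the real test runs several of these loops in parallel, which is why the add/remove records interleave:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # path as seen in the trace
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do
        n=$(( RANDOM % 8 + 1 ))          # assumed: NSIDs 1-8, backed by bdevs null0-null7
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$(( n - 1 ))"
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
    done

Because several such loops race, an add can target an NSID another loop has not yet removed; the test only cares that the target survives the churn.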
00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:31.219 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:31.219 rmmod nvme_tcp 00:06:31.479 rmmod nvme_fabrics 00:06:31.480 rmmod nvme_keyring 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3303295 ']' 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3303295 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 3303295 ']' 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 3303295 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3303295 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3303295' 00:06:31.480 killing process with pid 3303295 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 3303295 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 3303295 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:31.480 07:04:53 
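The records above walk the standard nvmftestfini teardown: sync, unload the host-side NVMe modules (the rmmod lines), kill the target process after confirming its name is reactor_1, and restore iptables minus the SPDK_NVMF-tagged rules. A condensed sketch; the helper names come from the trace, but their bodies here are assumptions:

    sync
    modprobe -r nvme-tcp; modprobe -r nvme-fabrics        # retried up to 20 times in the trace
    kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null        # stop the nvmf_tgt reactor
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the tagged test rules
    ip netns delete cvl_0_0_ns_spdk                       # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                              # as logged at the end of teardown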
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:31.480 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:34.029 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:34.029 00:06:34.029 real 0m49.337s 00:06:34.029 user 3m14.848s 00:06:34.029 sys 0m16.155s 00:06:34.029 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:34.029 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:34.029 ************************************ 00:06:34.029 END TEST nvmf_ns_hotplug_stress 00:06:34.029 ************************************ 00:06:34.029 07:04:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:34.029 07:04:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:34.029 07:04:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:34.029 07:04:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:34.029 ************************************ 00:06:34.029 START TEST nvmf_delete_subsystem 00:06:34.029 ************************************ 00:06:34.029 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:34.029 * Looking for test storage... 
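The real/user/sys summary and the END TEST / START TEST banners above come from the run_test wrapper that launches each suite. The actual implementation lives in common/autotest_common.sh; the sketch below only mirrors the behavior observable in this log, not the upstream code:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"       # produces the real/user/sys lines when the suite exits
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

Here it is invoked as run_test nvmf_delete_subsystem .../delete_subsystem.sh --transport=tcp, exactly as traced above.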
00:06:34.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:34.029 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:34.029 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:06:34.029 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:34.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.029 --rc genhtml_branch_coverage=1 00:06:34.029 --rc genhtml_function_coverage=1 00:06:34.029 --rc genhtml_legend=1 00:06:34.029 --rc geninfo_all_blocks=1 00:06:34.029 --rc geninfo_unexecuted_blocks=1 00:06:34.029 00:06:34.029 ' 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:34.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.029 --rc genhtml_branch_coverage=1 00:06:34.029 --rc genhtml_function_coverage=1 00:06:34.029 --rc genhtml_legend=1 00:06:34.029 --rc geninfo_all_blocks=1 00:06:34.029 --rc geninfo_unexecuted_blocks=1 00:06:34.029 00:06:34.029 ' 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:34.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.029 --rc genhtml_branch_coverage=1 00:06:34.029 --rc genhtml_function_coverage=1 00:06:34.029 --rc genhtml_legend=1 00:06:34.029 --rc geninfo_all_blocks=1 00:06:34.029 --rc geninfo_unexecuted_blocks=1 00:06:34.029 00:06:34.029 ' 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:34.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.029 --rc genhtml_branch_coverage=1 00:06:34.029 --rc genhtml_function_coverage=1 00:06:34.029 --rc genhtml_legend=1 00:06:34.029 --rc geninfo_all_blocks=1 00:06:34.029 --rc geninfo_unexecuted_blocks=1 00:06:34.029 00:06:34.029 ' 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
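The dense scripts/common.sh trace above is a version comparison: the harness checks whether the installed lcov (1.15) predates version 2 before exporting LCOV_OPTS. The algorithm splits both version strings on '.', '-' and ':' and compares the fields numerically, padding the shorter one with zeros. A condensed sketch (the upstream helper handles more operators and bookkeeping; this version returns the comparison directly):

    cmp_versions() {
        local op=$2 v
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}          # missing fields compare as 0
            (( a > b )) && { [[ $op == '>' || $op == '>=' ]]; return; }
            (( a < b )) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]    # all fields equal
    }
    cmp_versions 1.15 '<' 2 && echo "old lcov"   # 1 < 2 decides it, exactly as traced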
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:34.029 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:34.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:34.030 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.165 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:42.165 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:42.165 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:42.165 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:42.165 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:42.165 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:42.165 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:42.165 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:42.165 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:42.165 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:42.165 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:42.165 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:42.165 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
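One genuine error is buried in the trace above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' and bash rejects it with "integer expression expected", because -eq needs integer operands and the tested variable expanded to an empty string. The harness tolerates it and moves on, but the defensive pattern is worth noting (the variable name below is hypothetical, and this is not the upstream fix):

    flag=""                               # unset/empty numeric flag
    # [ "$flag" -eq 1 ]                   # runtime error: '' is not an integer
    if [ "${flag:-0}" -eq 1 ]; then       # default the empty value to 0 first
        echo "flag enabled"
    fi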
local -ga x722 00:06:42.165 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:42.165 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:42.165 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:42.165 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.165 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.165 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.165 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.165 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.165 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:42.166 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.166 
07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:42.166 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:42.166 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:42.166 Found net devices under 0000:4b:00.1: cvl_0_1 
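The two "Found net devices under 0000:4b:00.x" records close out NIC discovery: each Intel E810 function (vendor 0x8086, device 0x159b) is mapped to its kernel interface by globbing sysfs. The core of that mapping, shown with the address from this log (this is the standard sysfs layout, not anything SPDK-specific):

    pci=0000:4b:00.0
    pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )   # one entry per netdev, e.g. .../net/cvl_0_0
    pci_net_devs=( "${pci_net_devs[@]##*/}" )            # strip the path, keep the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"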
00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:42.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:42.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:06:42.166 00:06:42.166 --- 10.0.0.2 ping statistics --- 00:06:42.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.166 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:42.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:42.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:06:42.166 00:06:42.166 --- 10.0.0.1 ping statistics --- 00:06:42.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.166 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3315682 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3315682 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 3315682 ']' 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.166 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:42.167 07:05:03 
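The nvmf_tcp_init records above build the single-host, two-port topology these phy tests depend on: one E810 port (cvl_0_0, the target side) moves into the cvl_0_0_ns_spdk namespace, so initiator (10.0.0.1) and target (10.0.0.2) traffic crosses real hardware instead of loopback, and both pings verify the path before the target starts. The same sequence collected from the trace:

    ns=cvl_0_0_ns_spdk
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"                      # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # initiator -> target, 0.652 ms above
    ip netns exec "$ns" ping -c 1 10.0.0.1               # target -> initiator, 0.281 ms above

The SPDK_NVMF comment on the rule is what lets teardown strip it later with a plain grep -v.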
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.167 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:42.167 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.167 [2024-11-20 07:05:03.605977] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:06:42.167 [2024-11-20 07:05:03.606043] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.167 [2024-11-20 07:05:03.705814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.167 [2024-11-20 07:05:03.757052] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:42.167 [2024-11-20 07:05:03.757103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:42.167 [2024-11-20 07:05:03.757112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:42.167 [2024-11-20 07:05:03.757118] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:42.167 [2024-11-20 07:05:03.757125] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:42.167 [2024-11-20 07:05:03.758903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.167 [2024-11-20 07:05:03.758908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.167 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:42.167 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:06:42.167 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:42.167 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:42.167 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.428 [2024-11-20 07:05:04.458119] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:42.428 07:05:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.428 [2024-11-20 07:05:04.482430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.428 NULL1 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.428 Delay0 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3315848 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:42.428 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:42.428 [2024-11-20 07:05:04.609354] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
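Unwrapped from the xtrace above, the provisioning for this test is short. A condensed sketch, assuming scripts/rpc.py talking to the default /var/tmp/spdk.sock socket and the cvl_0_0/cvl_0_1 interfaces found during device discovery (the harness also waits for the RPC socket via waitforlisten, omitted here):

    # pin the target-side port into its own namespace; 10.0.0.1 stays in the root ns as the initiator
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'   # tagged so teardown's grep -v SPDK_NVMF can strip it

    # start nvmf_tgt inside the namespace, then build the subsystem over JSON-RPC
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MB null bdev, 512 B blocks
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # ~1 s injected latency (microseconds) keeps I/O queued
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The delay bdev is what makes the next phase deterministic: with one second of artificial latency, a queue-depth-128 workload is guaranteed to have commands in flight when the subsystem is deleted.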
00:06:44.342 07:05:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:44.342 07:05:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.342 07:05:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:06:44.603-00:06:45.551 [condensed: long runs of "Read completed with error (sct=0, sc=8)" and "Write completed with error (sct=0, sc=8)" interleaved with "starting I/O failed: -6", the queued I/O aborting while the subsystem is deleted; the distinct transport errors from that window follow]
[2024-11-20 07:05:06.778272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79d2c0 is same with the state(6) to be set
[2024-11-20 07:05:06.781293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f06a800d350 is same with the state(6) to be set
[2024-11-20 07:05:07.749344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e9a0 is same with the state(6) to be set
[2024-11-20 07:05:07.782430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79d4a0 is same with the state(6) to be set
[2024-11-20 07:05:07.783563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79d860 is same with the state(6) to be set
[2024-11-20 07:05:07.783987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f06a800d020 is same with the state(6) to be set
[2024-11-20 07:05:07.784237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f06a800d680 is same with the state(6) to be set
00:06:45.551 Initializing NVMe Controllers 00:06:45.551 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:45.551 Controller IO queue size 128, less than required. 00:06:45.551 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:45.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:45.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:45.551 Initialization complete. Launching workers. 
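This failure burst is the point of the test: nvmf_delete_subsystem tears cnode1 down while spdk_nvme_perf still has a full queue sitting behind the one-second delay bdev. Completions return (sct=0, sc=8), status code type 0 (generic) with status code 0x8, Command Aborted due to SQ Deletion, and fresh submissions fail with -6 (-ENXIO) once the qpair is gone. The driving pattern, sketched with the paths and flags from the trace above (only the backgrounding syntax is assumed):

    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &   # 5 s of QD-128, 70/30 mixed 512 B I/O
    perf_pid=$!
    sleep 2                                         # let commands pile up behind Delay0
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # everything still in flight now completes with (sct=0, sc=8); perf exits non-zero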
00:06:45.551 ======================================================== 00:06:45.551 Latency(us) 00:06:45.551 Device Information : IOPS MiB/s Average min max 00:06:45.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 190.44 0.09 894610.18 460.15 1011098.17 00:06:45.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.66 0.07 943209.60 266.39 1012154.13 00:06:45.551 ======================================================== 00:06:45.551 Total : 341.11 0.17 916076.10 266.39 1012154.13 00:06:45.551 00:06:45.551 [2024-11-20 07:05:07.784619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79e9a0 (9): Bad file descriptor 00:06:45.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:45.551 07:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.551 07:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:45.551 07:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3315848 00:06:45.551 07:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3315848 00:06:46.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3315848) - No such process 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3315848 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3315848 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3315848 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.124 07:05:08 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:46.124 [2024-11-20 07:05:08.316845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3316710 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3316710 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:46.124 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:46.385 [2024-11-20 07:05:08.420391] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
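The NOT helper that appeared just before this (NOT wait 3315848) is autotest_common.sh's expect-failure wrapper: it runs its arguments, records the exit status, and succeeds only if the wrapped command failed, which is exactly what reaping a perf that died on I/O errors should produce. Semantically it boils down to something like this simplified sketch:

    NOT() {                   # pass only when the wrapped command fails
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }
    NOT wait "$perf_pid"      # wait surfaces perf's non-zero exit status; the test requires it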
00:06:46.645 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:46.645 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3316710 00:06:46.645 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:47.216 07:05:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:47.216 07:05:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3316710 00:06:47.216 07:05:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:47.844 07:05:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:47.844 07:05:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3316710 00:06:47.844 07:05:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:48.105 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:48.105 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3316710 00:06:48.105 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:48.676 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:48.676 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3316710 00:06:48.676 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:49.248 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:49.248 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3316710 00:06:49.248 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:49.508 Initializing NVMe Controllers 00:06:49.508 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:49.508 Controller IO queue size 128, less than required. 00:06:49.508 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:49.508 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:49.508 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:49.508 Initialization complete. Launching workers. 
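Each (( delay++ > 20 )) / kill -0 / sleep 0.5 triple above is one turn of the watchdog in delete_subsystem.sh: perf gets roughly ten seconds of half-second probes to exit on its own before the test gives up. Approximately:

    delay=0
    while kill -0 "$perf_pid"; do       # kill -0 only probes the PID, no signal is delivered
        (( delay++ > 20 )) && exit 1    # ~10 s budget; bail out if perf is stuck
        sleep 0.5
    done                                # the final probe prints "No such process", ending the loop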
00:06:49.508 ======================================================== 00:06:49.508 Latency(us) 00:06:49.508 Device Information : IOPS MiB/s Average min max 00:06:49.508 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001836.44 1000188.80 1005460.90 00:06:49.508 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002815.62 1000369.38 1007815.83 00:06:49.508 ======================================================== 00:06:49.508 Total : 256.00 0.12 1002326.03 1000188.80 1007815.83 00:06:49.508 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3316710 00:06:49.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3316710) - No such process 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3316710 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:49.768 rmmod nvme_tcp 00:06:49.768 rmmod nvme_fabrics 00:06:49.768 rmmod nvme_keyring 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3315682 ']' 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3315682 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 3315682 ']' 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 3315682 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3315682 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' 
reactor_0 = sudo ']' 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3315682' 00:06:49.768 killing process with pid 3315682 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 3315682 00:06:49.768 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 3315682 00:06:50.028 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:50.028 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:50.028 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:50.028 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:50.028 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:50.028 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:50.028 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:50.028 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:50.028 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:50.028 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.028 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:50.029 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.940 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:51.940 00:06:51.940 real 0m18.299s 00:06:51.940 user 0m30.769s 00:06:51.940 sys 0m6.804s 00:06:51.940 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:51.940 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.940 ************************************ 00:06:51.940 END TEST nvmf_delete_subsystem 00:06:51.940 ************************************ 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:52.202 ************************************ 00:06:52.202 START TEST nvmf_host_management 00:06:52.202 ************************************ 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:52.202 * Looking for test storage... 
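Before the next test starts its storage probe below, note the shape of the teardown nvmftestfini just ran: unload the initiator-side modules, kill the target reactor, and undo only state the test tagged as its own. Sketched from the trace (the explicit ip netns delete is an assumption about what _remove_spdk_ns amounts to):

    modprobe -v -r nvme-tcp                                # drops nvme_tcp, nvme_fabrics, nvme_keyring as logged
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                     # stop nvmf_tgt (reactor_0, pid 3315682 here)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove only the SPDK_NVMF-tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                               # matches the flush logged right after this point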
00:06:52.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:52.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.202 --rc genhtml_branch_coverage=1 00:06:52.202 --rc genhtml_function_coverage=1 00:06:52.202 --rc genhtml_legend=1 00:06:52.202 --rc geninfo_all_blocks=1 00:06:52.202 --rc geninfo_unexecuted_blocks=1 00:06:52.202 00:06:52.202 ' 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:52.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.202 --rc genhtml_branch_coverage=1 00:06:52.202 --rc genhtml_function_coverage=1 00:06:52.202 --rc genhtml_legend=1 00:06:52.202 --rc geninfo_all_blocks=1 00:06:52.202 --rc geninfo_unexecuted_blocks=1 00:06:52.202 00:06:52.202 ' 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:52.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.202 --rc genhtml_branch_coverage=1 00:06:52.202 --rc genhtml_function_coverage=1 00:06:52.202 --rc genhtml_legend=1 00:06:52.202 --rc geninfo_all_blocks=1 00:06:52.202 --rc geninfo_unexecuted_blocks=1 00:06:52.202 00:06:52.202 ' 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:52.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.202 --rc genhtml_branch_coverage=1 00:06:52.202 --rc genhtml_function_coverage=1 00:06:52.202 --rc genhtml_legend=1 00:06:52.202 --rc geninfo_all_blocks=1 00:06:52.202 --rc geninfo_unexecuted_blocks=1 00:06:52.202 00:06:52.202 ' 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:52.202 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:52.203 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:52.203 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:52.464 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:52.464 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:52.464 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:52.464 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:52.464 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:52.464 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:52.464 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.464 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.464 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.464 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain directories, repeated by each nested source of paths/export.sh]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.464 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same toolchain prefix prepended again]:... 00:06:52.464 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same toolchain prefix prepended again]:... 00:06:52.464 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:52.464 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo [the PATH value assembled above] 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:06:52.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:52.465 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:00.605 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:00.605 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.605 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:00.606 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.606 07:05:21 
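
The scan above reduces to: match every PCI function against a table of supported NIC IDs (0x8086:0x159b is the Intel E810 found twice here, at 0000:4b:00.0 and 0000:4b:00.1), then resolve each matched function to its kernel interface through sysfs. A rough standalone sketch of that walk, not the actual gather_supported_nvmf_pci_devs implementation, which works from a prebuilt pci_bus_cache:

    #!/usr/bin/env bash
    # List Intel E810 PCI functions and the net devices bound to them.
    intel=0x8086 e810=0x159b
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == "$e810" ]] || continue
        echo "Found ${pci##*/} ($intel - $e810)"
        for net in "$pci"/net/*; do           # one entry per bound interface, e.g. cvl_0_0
            [[ -e $net ]] && echo "  net device: ${net##*/}"
        done
    done
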
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:00.606 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
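
Laid out plainly, the nvmf_tcp_init sequence just traced builds a two-namespace topology from the two E810 ports: cvl_0_0 becomes the target interface (10.0.0.2) inside the new cvl_0_0_ns_spdk namespace, while cvl_0_1 stays in the default namespace as the initiator (10.0.0.1). The same commands, condensed from the trace:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

Keeping target and initiator in separate namespaces forces the NVMe/TCP traffic across the physical link between the two ports rather than the kernel loopback path.
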
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:00.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:00.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:07:00.606 00:07:00.606 --- 10.0.0.2 ping statistics --- 00:07:00.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.606 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:00.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:00.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:07:00.606 00:07:00.606 --- 10.0.0.1 ping statistics --- 00:07:00.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.606 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3321588 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3321588 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:00.606 07:05:21 
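
The ipts call at nvmf/common.sh@287 expands (see the @790 line that follows it) into a plain iptables invocation with a "SPDK_NVMF:"-prefixed comment attached, so teardown can later strip exactly the rules this run installed. The wrapper is reconstructible from that expansion:

    # ipts: install an iptables rule tagged for later removal.
    ipts() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }

    ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

The two pings that follow (10.0.0.2 from the default namespace, then 10.0.0.1 from inside cvl_0_0_ns_spdk) prove both directions of the link work before the target is started.
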
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3321588 ']' 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:00.606 07:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.606 [2024-11-20 07:05:22.019923] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:07:00.607 [2024-11-20 07:05:22.019990] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.607 [2024-11-20 07:05:22.119930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.607 [2024-11-20 07:05:22.173687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:00.607 [2024-11-20 07:05:22.173740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:00.607 [2024-11-20 07:05:22.173749] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:00.607 [2024-11-20 07:05:22.173756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:00.607 [2024-11-20 07:05:22.173762] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
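
waitforlisten, whose trace appears above, is the harness's readiness gate: it returns once the target process is alive and answering on its RPC socket. A simplified equivalent, assuming rpc.py (scripts/rpc.py) is on PATH; the real helper in autotest_common.sh adds more bookkeeping around retries and diagnostics:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 100; i != 0; i--)); do
            kill -0 "$pid" 2> /dev/null || return 1                        # target died early
            rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1                                                           # never came up
    }
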
00:07:00.607 [2024-11-20 07:05:22.176099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.607 [2024-11-20 07:05:22.176267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.607 [2024-11-20 07:05:22.176580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:00.607 [2024-11-20 07:05:22.176583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.607 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:00.607 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:07:00.607 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:00.607 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:00.607 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.867 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:00.868 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:00.868 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.868 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.868 [2024-11-20 07:05:22.900383] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:00.868 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.868 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:00.868 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:00.868 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.868 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:00.868 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:00.868 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:00.868 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.868 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.868 Malloc0 00:07:00.868 [2024-11-20 07:05:22.982830] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:00.868 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.868 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:00.868 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:00.868 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.868 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
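
host_management.sh@22-30 wipes rpcs.txt, cats a block of RPC commands into it, and replays them against the target; the trace shows only the side effects (the TCP transport from @18, a Malloc0 bdev sized by MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512, and a listener on 10.0.0.2:4420). The batch below is therefore a representative reconstruction consistent with those side effects, not the script's literal contents; rpc.py executes one command per stdin line when invoked without a subcommand:

    rpc.py -s /var/tmp/spdk.sock << 'RPC'
    bdev_malloc_create 64 512 -b Malloc0
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    RPC

Leaving allow-any-host off and whitelisting nqn.2016-06.io.spdk:host0 explicitly is what makes the later nvmf_subsystem_remove_host call an effective fault injection.
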
target/host_management.sh@73 -- # perfpid=3321794 00:07:00.868 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3321794 /var/tmp/bdevperf.sock 00:07:00.868 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3321794 ']' 00:07:00.868 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:00.868 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:00.868 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:00.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:00.868 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:00.868 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:00.868 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:00.868 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.868 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:00.868 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:00.868 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:00.868 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:00.868 { 00:07:00.868 "params": { 00:07:00.868 "name": "Nvme$subsystem", 00:07:00.868 "trtype": "$TEST_TRANSPORT", 00:07:00.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:00.868 "adrfam": "ipv4", 00:07:00.868 "trsvcid": "$NVMF_PORT", 00:07:00.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:00.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:00.868 "hdgst": ${hdgst:-false}, 00:07:00.868 "ddgst": ${ddgst:-false} 00:07:00.868 }, 00:07:00.868 "method": "bdev_nvme_attach_controller" 00:07:00.868 } 00:07:00.868 EOF 00:07:00.868 )") 00:07:00.868 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:00.868 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:00.868 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:00.868 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:00.868 "params": { 00:07:00.868 "name": "Nvme0", 00:07:00.868 "trtype": "tcp", 00:07:00.868 "traddr": "10.0.0.2", 00:07:00.868 "adrfam": "ipv4", 00:07:00.868 "trsvcid": "4420", 00:07:00.868 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:00.868 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:00.868 "hdgst": false, 00:07:00.868 "ddgst": false 00:07:00.868 }, 00:07:00.868 "method": "bdev_nvme_attach_controller" 00:07:00.868 }' 00:07:00.868 [2024-11-20 07:05:23.092339] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
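
bdevperf receives its attach configuration through --json /dev/fd/63, i.e. bash process substitution: gen_nvmf_target_json renders a JSON document around the bdev_nvme_attach_controller params printed above, so the Nvme0n1 bdev is created over NVMe/TCP before the workload begins. The invocation written out (path shortened):

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10       # queue depth 64, 64 KiB IOs, 10 s verify pass
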
00:07:00.868 [2024-11-20 07:05:23.092407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3321794 ] 00:07:01.129 [2024-11-20 07:05:23.187174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.129 [2024-11-20 07:05:23.241345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.446 Running I/O for 10 seconds... 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:01.730 07:05:23 
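
waitforio, traced above, gates the fault injection: it polls bdevperf's iostat until Nvme0n1 shows at least 100 completed reads (579 on the first poll here), proving IO is actually flowing before the host gets yanked. Reassembled from the trace, with the retry pacing assumed since the loop exited on its first pass:

    waitforio() {
        local rpc_sock=$1 bdev=$2
        local ret=1 i read_io_count
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25    # assumed interval; not visible in this trace
        done
        return $ret
    }

    waitforio /var/tmp/bdevperf.sock Nvme0n1
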
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.730 [2024-11-20 07:05:23.990724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf150 is same with the state(6) to be set 00:07:01.730 [2024-11-20 07:05:23.990796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf150 is same with the state(6) to be set 00:07:01.730 [2024-11-20 07:05:23.990805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf150 is same with the state(6) to be set 00:07:01.730 [2024-11-20 07:05:23.990813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf150 is same with the state(6) to be set 00:07:01.730 [2024-11-20 07:05:23.990820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf150 is same with the state(6) to be set 00:07:01.730 [2024-11-20 07:05:23.990828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf150 is same with the state(6) to be set 00:07:01.730 [2024-11-20 07:05:23.990835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf150 is same with the state(6) to be set 00:07:01.730 [2024-11-20 07:05:23.990841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf150 is same with the state(6) to be set 00:07:01.730 [2024-11-20 07:05:23.990848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf150 is same with the state(6) to be set 00:07:01.730 [2024-11-20 07:05:23.990855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf150 is same with the state(6) to be set 00:07:01.730 [2024-11-20 07:05:23.990861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf150 is same with the state(6) to be set 00:07:01.730 [2024-11-20 07:05:23.990868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf150 is same with the state(6) to be set 00:07:01.730 [2024-11-20 07:05:23.990875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf150 is same with the state(6) to be set 00:07:01.730 [2024-11-20 07:05:23.990881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf150 is same with the state(6) to be set 00:07:01.730 [2024-11-20 07:05:23.990888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf150 is same with the state(6) to be set 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.730 07:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.730 [2024-11-20 07:05:23.998928] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:01.730 [2024-11-20 07:05:23.998997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.730 [2024-11-20 07:05:23.999009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:01.730 [2024-11-20 07:05:23.999017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.730 [2024-11-20 07:05:23.999025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:01.730 [2024-11-20 07:05:23.999033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.730 [2024-11-20 07:05:23.999042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:01.730 [2024-11-20 07:05:23.999050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.730 [2024-11-20 07:05:23.999057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2304000 is same with the state(6) to be set 00:07:01.730 [2024-11-20 07:05:24.000301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.730 [2024-11-20 07:05:24.000325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.730 [2024-11-20 07:05:24.000342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000421] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.000988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.000995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.001005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.731 [2024-11-20 07:05:24.001012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.731 [2024-11-20 07:05:24.001022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.732 [2024-11-20 07:05:24.001476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.732 [2024-11-20 07:05:24.001483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.008 [2024-11-20 07:05:24.002757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:07:02.008 task offset: 81920 on job bdev=Nvme0n1 fails
00:07:02.008
00:07:02.008 Latency(us)
00:07:02.008 [2024-11-20T06:05:24.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:02.008 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:02.008 Job: Nvme0n1 ended in about 0.43 seconds with error
00:07:02.008 Verification LBA range: start 0x0 length 0x400
00:07:02.008 Nvme0n1 : 0.43 1481.86 92.62 148.19 0.00 38095.37 1713.49 35607.89
00:07:02.008 [2024-11-20T06:05:24.286Z] ===================================================================================================================
00:07:02.008 [2024-11-20T06:05:24.286Z] Total : 1481.86 92.62 148.19 0.00 38095.37 1713.49 35607.89
00:07:02.008 [2024-11-20 07:05:24.004973] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.008 [2024-11-20 07:05:24.005009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2304000 (9): Bad file descriptor 00:07:02.008 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.008 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 [2024-11-20 07:05:24.020685] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:02.949 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3321794 00:07:02.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3321794) - No such process 00:07:02.949 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:02.949 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:02.949 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:02.949 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:02.949 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:02.949 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:02.949 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:02.949 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:02.949 { 00:07:02.949 "params": { 00:07:02.949 "name": "Nvme$subsystem", 00:07:02.949 "trtype": "$TEST_TRANSPORT", 00:07:02.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:02.949 "adrfam": "ipv4", 00:07:02.949 "trsvcid": "$NVMF_PORT", 00:07:02.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:02.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:02.949 "hdgst": ${hdgst:-false}, 00:07:02.949 "ddgst": ${ddgst:-false} 00:07:02.949 }, 00:07:02.949 "method": "bdev_nvme_attach_controller" 00:07:02.949 } 00:07:02.949 EOF 00:07:02.949 )") 00:07:02.949 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:02.949 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:07:02.949 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:02.949 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:02.950 "params": { 00:07:02.950 "name": "Nvme0", 00:07:02.950 "trtype": "tcp", 00:07:02.950 "traddr": "10.0.0.2", 00:07:02.950 "adrfam": "ipv4", 00:07:02.950 "trsvcid": "4420", 00:07:02.950 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:02.950 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:02.950 "hdgst": false, 00:07:02.950 "ddgst": false 00:07:02.950 }, 00:07:02.950 "method": "bdev_nvme_attach_controller" 00:07:02.950 }' 00:07:02.950 [2024-11-20 07:05:25.068422] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:07:02.950 [2024-11-20 07:05:25.068478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3322158 ] 00:07:02.950 [2024-11-20 07:05:25.156504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.950 [2024-11-20 07:05:25.191923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.211 Running I/O for 1 seconds... 00:07:04.152 1598.00 IOPS, 99.88 MiB/s 00:07:04.152 Latency(us) 00:07:04.152 [2024-11-20T06:05:26.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:04.152 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:04.152 Verification LBA range: start 0x0 length 0x400 00:07:04.152 Nvme0n1 : 1.02 1625.38 101.59 0.00 0.00 38699.25 6362.45 33423.36 00:07:04.152 [2024-11-20T06:05:26.430Z] =================================================================================================================== 00:07:04.152 [2024-11-20T06:05:26.430Z] Total : 1625.38 101.59 0.00 0.00 38699.25 6362.45 33423.36 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:04.413 rmmod nvme_tcp 00:07:04.413 rmmod nvme_fabrics 00:07:04.413 rmmod nvme_keyring 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3321588 ']' 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3321588 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 3321588 ']' 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 3321588 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3321588 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3321588' 00:07:04.413 killing process with pid 3321588 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 3321588 00:07:04.413 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 3321588 00:07:04.674 [2024-11-20 07:05:26.762330] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:04.674 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:04.674 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:04.674 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:04.674 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:04.674 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:04.674 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:04.674 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:04.674 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:04.674 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:04.674 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.674 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:04.674 07:05:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.587 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:06.847 07:05:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:06.847 00:07:06.847 real 0m14.608s 00:07:06.847 user 0m23.083s 00:07:06.847 sys 0m6.783s 00:07:06.847 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:06.847 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:06.847 ************************************ 00:07:06.847 END TEST nvmf_host_management 00:07:06.847 ************************************ 00:07:06.847 07:05:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:06.847 07:05:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:06.847 07:05:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:06.847 07:05:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:06.847 ************************************ 00:07:06.847 START TEST nvmf_lvol 00:07:06.847 ************************************ 00:07:06.847 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:06.847 * Looking for test storage... 00:07:06.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.847 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:06.847 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:07:06.847 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:07.109 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:07.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.110 --rc genhtml_branch_coverage=1 00:07:07.110 --rc genhtml_function_coverage=1 00:07:07.110 --rc genhtml_legend=1 00:07:07.110 --rc geninfo_all_blocks=1 00:07:07.110 --rc geninfo_unexecuted_blocks=1 00:07:07.110 00:07:07.110 ' 00:07:07.110 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:07.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.110 --rc genhtml_branch_coverage=1 00:07:07.110 --rc genhtml_function_coverage=1 00:07:07.110 --rc genhtml_legend=1 00:07:07.110 --rc geninfo_all_blocks=1 00:07:07.110 --rc geninfo_unexecuted_blocks=1 00:07:07.110 00:07:07.110 ' 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:07.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.111 --rc genhtml_branch_coverage=1 00:07:07.111 --rc genhtml_function_coverage=1 00:07:07.111 --rc genhtml_legend=1 00:07:07.111 --rc geninfo_all_blocks=1 00:07:07.111 --rc geninfo_unexecuted_blocks=1 00:07:07.111 00:07:07.111 ' 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:07.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.111 --rc genhtml_branch_coverage=1 00:07:07.111 --rc genhtml_function_coverage=1 00:07:07.111 --rc genhtml_legend=1 00:07:07.111 --rc geninfo_all_blocks=1 00:07:07.111 --rc geninfo_unexecuted_blocks=1 00:07:07.111 00:07:07.111 ' 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
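The long scripts/common.sh run above is the lcov gate: lt 1.15 2 splits each version string on '.', '-' or ':' and compares it field by field, and only then are the --rc lcov flags exported. A compact sketch of that comparison, assuming decimal simply echoes numeric fields (the only path the trace shows) and falls back to 0 otherwise:

# Field-wise version comparison mirroring the lt/cmp_versions trace above.
decimal() {
    local d=$1
    # Trace shows the numeric path only; the 0 fallback is an assumption.
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
}
cmp_versions() {
    local IFS=.-:          # split fields on '.', '-' or ':'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v a b lt=0 gt=0
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=$(decimal "${ver1[v]:-0}")
        b=$(decimal "${ver2[v]:-0}")
        ((a > b)) && { gt=1; break; }
        ((a < b)) && { lt=1; break; }
    done
    case $op in
        '<') ((lt == 1)) ;;
        '>') ((gt == 1)) ;;
    esac
}
lt() { cmp_versions "$1" '<' "$2"; }
# lt 1.15 2 compares 1 vs 2 in the first field and succeeds, so the
# lcov_branch/function coverage flags get exported, as seen above.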
00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.111 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.112 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.112 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.112 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.112 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:07.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:07.113 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:07.114 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:15.253 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:15.253 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:15.253 07:05:36 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:15.253 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:15.253 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:15.253 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:15.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:15.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:07:15.254 00:07:15.254 --- 10.0.0.2 ping statistics --- 00:07:15.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.254 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:15.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:15.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:07:15.254 00:07:15.254 --- 10.0.0.1 ping statistics --- 00:07:15.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.254 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3326835 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3326835 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 3326835 ']' 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:15.254 07:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:15.254 [2024-11-20 07:05:36.761304] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
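Everything from nvmf_tcp_init to this point wires up the physical-NIC test topology: the first e810 port, cvl_0_0, is moved into a dedicated cvl_0_0_ns_spdk network namespace as the target side at 10.0.0.2, the second port cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. Condensed from the trace (interface names are the ones this rig reports):

# Condensed netns plumbing, straight from the commands traced above.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator port stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                           # root ns -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1       # namespace -> root ns
# The target then runs inside the namespace, exactly as in this run:
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7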
00:07:15.254 [2024-11-20 07:05:36.761369] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.254 [2024-11-20 07:05:36.863023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.254 [2024-11-20 07:05:36.915340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.254 [2024-11-20 07:05:36.915389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.254 [2024-11-20 07:05:36.915399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.254 [2024-11-20 07:05:36.915406] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.254 [2024-11-20 07:05:36.915413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:15.254 [2024-11-20 07:05:36.917502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.254 [2024-11-20 07:05:36.917663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.254 [2024-11-20 07:05:36.917663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.516 07:05:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:15.516 07:05:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:07:15.516 07:05:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:15.516 07:05:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:15.516 07:05:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:15.516 07:05:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:15.516 07:05:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:15.516 [2024-11-20 07:05:37.784753] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.778 07:05:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:15.778 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:15.778 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:16.040 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:16.040 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:16.302 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:16.563 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6dd22db7-697f-4796-b40b-d89d452591f4 00:07:16.563 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6dd22db7-697f-4796-b40b-d89d452591f4 lvol 20 00:07:16.824 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=44900bdf-f8f1-416f-9384-534cd3cd5d04 00:07:16.824 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:16.824 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 44900bdf-f8f1-416f-9384-534cd3cd5d04 00:07:17.086 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:17.347 [2024-11-20 07:05:39.393217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.347 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:17.347 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3327507 00:07:17.347 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:17.347 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:18.733 07:05:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 44900bdf-f8f1-416f-9384-534cd3cd5d04 MY_SNAPSHOT 00:07:18.733 07:05:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=208d9315-888d-4813-a93c-215f471df4a8 00:07:18.733 07:05:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 44900bdf-f8f1-416f-9384-534cd3cd5d04 30 00:07:18.995 07:05:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 208d9315-888d-4813-a93c-215f471df4a8 MY_CLONE 00:07:18.995 07:05:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6ad6ed64-64b4-43fe-80ee-c8e9c8d717d6 00:07:18.995 07:05:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6ad6ed64-64b4-43fe-80ee-c8e9c8d717d6 00:07:19.567 07:05:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3327507 00:07:27.703 Initializing NVMe Controllers 00:07:27.703 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:27.703 Controller IO queue size 128, less than required. 00:07:27.703 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
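The RPCs leading into this perf run exercise the whole logical-volume lifecycle against live randwrite traffic: a 20 MiB lvol on a raid0 of two malloc bdevs is exported over NVMe/TCP, and while spdk_nvme_perf drives it, the lvol is snapshotted, resized to 30 MiB, the snapshot is cloned, and the clone is inflated so it no longer depends on the snapshot. Condensed, with rpc.py standing for SPDK's scripts/rpc.py and the UUIDs as this run reported them:

# The lvol lifecycle driven above, condensed from the trace.
rpc.py bdev_malloc_create 64 512                   # -> Malloc0
rpc.py bdev_malloc_create 64 512                   # -> Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
rpc.py bdev_lvol_create_lvstore raid0 lvs          # -> 6dd22db7-...
rpc.py bdev_lvol_create -u 6dd22db7-697f-4796-b40b-d89d452591f4 lvol 20
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 44900bdf-f8f1-416f-9384-534cd3cd5d04
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# ... spdk_nvme_perf runs -w randwrite -q 128 -o 4096 -t 10 while:
rpc.py bdev_lvol_snapshot 44900bdf-f8f1-416f-9384-534cd3cd5d04 MY_SNAPSHOT
rpc.py bdev_lvol_resize 44900bdf-f8f1-416f-9384-534cd3cd5d04 30
rpc.py bdev_lvol_clone 208d9315-888d-4813-a93c-215f471df4a8 MY_CLONE
rpc.py bdev_lvol_inflate 6ad6ed64-64b4-43fe-80ee-c8e9c8d717d6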
00:07:27.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:27.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:27.703 Initialization complete. Launching workers. 00:07:27.703 ======================================================== 00:07:27.703 Latency(us) 00:07:27.703 Device Information : IOPS MiB/s Average min max 00:07:27.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16497.97 64.45 7760.65 1592.25 45849.50 00:07:27.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17627.87 68.86 7264.21 386.20 57322.75 00:07:27.703 ======================================================== 00:07:27.703 Total : 34125.84 133.30 7504.21 386.20 57322.75 00:07:27.703 00:07:27.703 07:05:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:27.963 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 44900bdf-f8f1-416f-9384-534cd3cd5d04 00:07:28.225 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6dd22db7-697f-4796-b40b-d89d452591f4 00:07:28.225 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:28.225 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:28.225 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:28.225 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:28.225 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:28.225 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:28.486 rmmod nvme_tcp 00:07:28.486 rmmod nvme_fabrics 00:07:28.486 rmmod nvme_keyring 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3326835 ']' 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3326835 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 3326835 ']' 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 3326835 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3326835 00:07:28.486 07:05:50 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3326835' 00:07:28.486 killing process with pid 3326835 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 3326835 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 3326835 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.486 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.031 07:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:31.031 00:07:31.031 real 0m23.879s 00:07:31.031 user 1m4.675s 00:07:31.031 sys 0m8.627s 00:07:31.031 07:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:31.031 07:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:31.031 ************************************ 00:07:31.031 END TEST nvmf_lvol 00:07:31.031 ************************************ 00:07:31.031 07:05:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:31.031 07:05:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:31.031 07:05:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:31.031 07:05:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:31.031 ************************************ 00:07:31.031 START TEST nvmf_lvs_grow 00:07:31.031 ************************************ 00:07:31.031 07:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:31.031 * Looking for test storage... 
00:07:31.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.031 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:31.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.031 --rc genhtml_branch_coverage=1 00:07:31.031 --rc genhtml_function_coverage=1 00:07:31.031 --rc genhtml_legend=1 00:07:31.031 --rc geninfo_all_blocks=1 00:07:31.031 --rc geninfo_unexecuted_blocks=1 00:07:31.031 00:07:31.031 ' 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:31.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.032 --rc genhtml_branch_coverage=1 00:07:31.032 --rc genhtml_function_coverage=1 00:07:31.032 --rc genhtml_legend=1 00:07:31.032 --rc geninfo_all_blocks=1 00:07:31.032 --rc geninfo_unexecuted_blocks=1 00:07:31.032 00:07:31.032 ' 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:31.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.032 --rc genhtml_branch_coverage=1 00:07:31.032 --rc genhtml_function_coverage=1 00:07:31.032 --rc genhtml_legend=1 00:07:31.032 --rc geninfo_all_blocks=1 00:07:31.032 --rc geninfo_unexecuted_blocks=1 00:07:31.032 00:07:31.032 ' 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:31.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.032 --rc genhtml_branch_coverage=1 00:07:31.032 --rc genhtml_function_coverage=1 00:07:31.032 --rc genhtml_legend=1 00:07:31.032 --rc geninfo_all_blocks=1 00:07:31.032 --rc geninfo_unexecuted_blocks=1 00:07:31.032 00:07:31.032 ' 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:31.032 07:05:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:31.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:31.032 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
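Note: the '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected' message earlier is the classic empty-string-vs-integer test: '[' '' -eq 1 ']' prints the complaint, evaluates false, and the script simply carries on. A defensive spelling of that pattern, as a hedged sketch (SOME_FLAG is a placeholder variable, not the one common.sh actually tests):

# Default the variable to 0 before the numeric test so [ never sees an empty string.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi

00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow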
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:39.171 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:39.171 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:39.171 07:06:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:39.171 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:39.172 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:39.172 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:39.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:07:39.172 00:07:39.172 --- 10.0.0.2 ping statistics --- 00:07:39.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.172 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms
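Note: the block above is nvmf_tcp_init wiring the two e810 ports into a point-to-point NVMe/TCP rig: the target port moves into its own network namespace so both ends of the link can live on one host. Condensed straight from the trace, the essential commands are:

ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                  # reachability check in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two pings (0.674 ms and 0.295 ms round trips here) are the gate before any NVMe traffic is attempted.

00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.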
00:07:39.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:07:39.172 00:07:39.172 --- 10.0.0.1 ping statistics --- 00:07:39.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.172 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3333936 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3333936 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 3333936 ']' 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:39.172 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:39.172 [2024-11-20 07:06:00.696490] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
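Note: nvmfappstart launches nvmf_tgt inside the target namespace ('ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1'), and waitforlisten then polls until the app answers on /var/tmp/spdk.sock. A simplified sketch of that wait loop (the real helper in autotest_common.sh also checks that the pid is still alive; the bound of 100 mirrors max_retries above):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for (( i = 0; i < 100; i++ )); do
    # rpc_get_methods succeeds as soon as the RPC server accepts connections.
    if "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5
done
(( i < 100 )) || { echo "nvmf_tgt never started listening" >&2; exit 1; }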
00:07:39.172 [2024-11-20 07:06:00.696567] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.172 [2024-11-20 07:06:00.795545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.172 [2024-11-20 07:06:00.847058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.172 [2024-11-20 07:06:00.847109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.172 [2024-11-20 07:06:00.847117] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.172 [2024-11-20 07:06:00.847124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.172 [2024-11-20 07:06:00.847130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:39.172 [2024-11-20 07:06:00.847886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.434 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:39.434 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:07:39.434 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:39.434 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:39.434 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:39.434 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.434 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:39.695 [2024-11-20 07:06:01.724505] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.695 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:39.695 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:39.695 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:39.695 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:39.695 ************************************ 00:07:39.695 START TEST lvs_grow_clean 00:07:39.695 ************************************ 00:07:39.695 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:07:39.695 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:39.695 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:39.695 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:39.695 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:39.695 07:06:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:39.695 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:39.695 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:39.695 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:39.695 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:39.956 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:39.956 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:39.956 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=33526aaa-896a-4501-8fe2-784708753a22 00:07:39.956 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33526aaa-896a-4501-8fe2-784708753a22 00:07:39.956 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:40.217 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:40.217 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:40.217 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 33526aaa-896a-4501-8fe2-784708753a22 lvol 150 00:07:40.478 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b64c6237-0e91-4351-ae5c-f19aac5f2004 00:07:40.478 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:40.478 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:40.739 [2024-11-20 07:06:02.757186] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:40.739 [2024-11-20 07:06:02.757260] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:40.739 true 00:07:40.739 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
33526aaa-896a-4501-8fe2-784708753a22 00:07:40.739 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:40.739 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:40.739 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:41.000 07:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b64c6237-0e91-4351-ae5c-f19aac5f2004 00:07:41.261 07:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:41.261 [2024-11-20 07:06:03.527645] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.523 07:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:41.523 07:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3334661 00:07:41.523 07:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:41.523 07:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:41.523 07:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3334661 /var/tmp/bdevperf.sock 00:07:41.523 07:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 3334661 ']' 00:07:41.523 07:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:41.523 07:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:41.523 07:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:41.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:41.523 07:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:41.523 07:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:41.523 [2024-11-20 07:06:03.766467] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
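Note: everything the clean-grow case needs is assembled over JSON-RPC against a file-backed AIO bdev: a 200M file becomes aio_bdev, an lvstore with 4 MiB clusters sits on top (49 data clusters), a 150 MiB lvol is carved out and exported as a namespace of nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, and bdevperf attaches to it as Nvme0. A condensed sketch of that sequence ($aio_file stands for the test/nvmf/target/aio_bdev backing file; the UUIDs are captured from each call's output):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
truncate -s 200M "$aio_file"
$rpc bdev_aio_create "$aio_file" aio_bdev 4096
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
truncate -s 400M "$aio_file"            # grow the backing file...
$rpc bdev_aio_rescan aio_bdev           # ...and let the bdev pick up the new size (51200 -> 102400 blocks)
$rpc bdev_lvol_grow_lvstore -u "$lvs"   # run at @60 below, while bdevperf I/O is in flight: 49 -> 99 clusters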
00:07:41.524 [2024-11-20 07:06:03.766538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3334661 ] 00:07:41.785 [2024-11-20 07:06:03.857081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.785 [2024-11-20 07:06:03.909442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.357 07:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:42.357 07:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:07:42.357 07:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:42.929 Nvme0n1 00:07:42.929 07:06:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:42.929 [ 00:07:42.929 { 00:07:42.929 "name": "Nvme0n1", 00:07:42.929 "aliases": [ 00:07:42.929 "b64c6237-0e91-4351-ae5c-f19aac5f2004" 00:07:42.929 ], 00:07:42.929 "product_name": "NVMe disk", 00:07:42.929 "block_size": 4096, 00:07:42.929 "num_blocks": 38912, 00:07:42.929 "uuid": "b64c6237-0e91-4351-ae5c-f19aac5f2004", 00:07:42.929 "numa_id": 0, 00:07:42.929 "assigned_rate_limits": { 00:07:42.929 "rw_ios_per_sec": 0, 00:07:42.929 "rw_mbytes_per_sec": 0, 00:07:42.929 "r_mbytes_per_sec": 0, 00:07:42.929 "w_mbytes_per_sec": 0 00:07:42.929 }, 00:07:42.929 "claimed": false, 00:07:42.929 "zoned": false, 00:07:42.929 "supported_io_types": { 00:07:42.929 "read": true, 00:07:42.929 "write": true, 00:07:42.929 "unmap": true, 00:07:42.929 "flush": true, 00:07:42.929 "reset": true, 00:07:42.929 "nvme_admin": true, 00:07:42.929 "nvme_io": true, 00:07:42.929 "nvme_io_md": false, 00:07:42.929 "write_zeroes": true, 00:07:42.929 "zcopy": false, 00:07:42.929 "get_zone_info": false, 00:07:42.929 "zone_management": false, 00:07:42.929 "zone_append": false, 00:07:42.929 "compare": true, 00:07:42.929 "compare_and_write": true, 00:07:42.929 "abort": true, 00:07:42.929 "seek_hole": false, 00:07:42.929 "seek_data": false, 00:07:42.929 "copy": true, 00:07:42.929 "nvme_iov_md": false 00:07:42.929 }, 00:07:42.929 "memory_domains": [ 00:07:42.929 { 00:07:42.929 "dma_device_id": "system", 00:07:42.929 "dma_device_type": 1 00:07:42.929 } 00:07:42.929 ], 00:07:42.929 "driver_specific": { 00:07:42.929 "nvme": [ 00:07:42.929 { 00:07:42.929 "trid": { 00:07:42.929 "trtype": "TCP", 00:07:42.929 "adrfam": "IPv4", 00:07:42.929 "traddr": "10.0.0.2", 00:07:42.929 "trsvcid": "4420", 00:07:42.929 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:42.929 }, 00:07:42.929 "ctrlr_data": { 00:07:42.929 "cntlid": 1, 00:07:42.929 "vendor_id": "0x8086", 00:07:42.929 "model_number": "SPDK bdev Controller", 00:07:42.929 "serial_number": "SPDK0", 00:07:42.929 "firmware_revision": "25.01", 00:07:42.929 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:42.929 "oacs": { 00:07:42.929 "security": 0, 00:07:42.929 "format": 0, 00:07:42.929 "firmware": 0, 00:07:42.929 "ns_manage": 0 00:07:42.929 }, 00:07:42.929 "multi_ctrlr": true, 00:07:42.929 
"ana_reporting": false 00:07:42.929 }, 00:07:42.929 "vs": { 00:07:42.929 "nvme_version": "1.3" 00:07:42.929 }, 00:07:42.930 "ns_data": { 00:07:42.930 "id": 1, 00:07:42.930 "can_share": true 00:07:42.930 } 00:07:42.930 } 00:07:42.930 ], 00:07:42.930 "mp_policy": "active_passive" 00:07:42.930 } 00:07:42.930 } 00:07:42.930 ] 00:07:42.930 07:06:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3334859 00:07:42.930 07:06:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:42.930 07:06:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:43.190 Running I/O for 10 seconds... 00:07:44.132 Latency(us) 00:07:44.132 [2024-11-20T06:06:06.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:44.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.132 Nvme0n1 : 1.00 25001.00 97.66 0.00 0.00 0.00 0.00 0.00 00:07:44.132 [2024-11-20T06:06:06.410Z] =================================================================================================================== 00:07:44.132 [2024-11-20T06:06:06.410Z] Total : 25001.00 97.66 0.00 0.00 0.00 0.00 0.00 00:07:44.132 00:07:45.072 07:06:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 33526aaa-896a-4501-8fe2-784708753a22 00:07:45.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.072 Nvme0n1 : 2.00 25170.50 98.32 0.00 0.00 0.00 0.00 0.00 00:07:45.072 [2024-11-20T06:06:07.350Z] =================================================================================================================== 00:07:45.072 [2024-11-20T06:06:07.350Z] Total : 25170.50 98.32 0.00 0.00 0.00 0.00 0.00 00:07:45.072 00:07:45.072 true 00:07:45.073 07:06:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33526aaa-896a-4501-8fe2-784708753a22 00:07:45.073 07:06:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:45.333 07:06:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:45.333 07:06:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:45.333 07:06:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3334859 00:07:46.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.274 Nvme0n1 : 3.00 25256.00 98.66 0.00 0.00 0.00 0.00 0.00 00:07:46.274 [2024-11-20T06:06:08.552Z] =================================================================================================================== 00:07:46.274 [2024-11-20T06:06:08.552Z] Total : 25256.00 98.66 0.00 0.00 0.00 0.00 0.00 00:07:46.274 00:07:47.215 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.215 Nvme0n1 : 4.00 25317.75 98.90 0.00 0.00 0.00 0.00 0.00 00:07:47.215 [2024-11-20T06:06:09.493Z] 
=================================================================================================================== 00:07:47.215 [2024-11-20T06:06:09.493Z] Total : 25317.75 98.90 0.00 0.00 0.00 0.00 0.00 00:07:47.215 00:07:48.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.156 Nvme0n1 : 5.00 25333.20 98.96 0.00 0.00 0.00 0.00 0.00 00:07:48.156 [2024-11-20T06:06:10.434Z] =================================================================================================================== 00:07:48.156 [2024-11-20T06:06:10.434Z] Total : 25333.20 98.96 0.00 0.00 0.00 0.00 0.00 00:07:48.156 00:07:49.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.097 Nvme0n1 : 6.00 25365.33 99.08 0.00 0.00 0.00 0.00 0.00 00:07:49.097 [2024-11-20T06:06:11.375Z] =================================================================================================================== 00:07:49.097 [2024-11-20T06:06:11.375Z] Total : 25365.33 99.08 0.00 0.00 0.00 0.00 0.00 00:07:49.097 00:07:50.040 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.040 Nvme0n1 : 7.00 25389.57 99.18 0.00 0.00 0.00 0.00 0.00 00:07:50.040 [2024-11-20T06:06:12.318Z] =================================================================================================================== 00:07:50.040 [2024-11-20T06:06:12.318Z] Total : 25389.57 99.18 0.00 0.00 0.00 0.00 0.00 00:07:50.040 00:07:51.425 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.425 Nvme0n1 : 8.00 25408.00 99.25 0.00 0.00 0.00 0.00 0.00 00:07:51.425 [2024-11-20T06:06:13.703Z] =================================================================================================================== 00:07:51.425 [2024-11-20T06:06:13.704Z] Total : 25408.00 99.25 0.00 0.00 0.00 0.00 0.00 00:07:51.426 00:07:51.996 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.996 Nvme0n1 : 9.00 25429.00 99.33 0.00 0.00 0.00 0.00 0.00 00:07:51.996 [2024-11-20T06:06:14.274Z] =================================================================================================================== 00:07:51.996 [2024-11-20T06:06:14.274Z] Total : 25429.00 99.33 0.00 0.00 0.00 0.00 0.00 00:07:51.996 00:07:53.380 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.380 Nvme0n1 : 10.00 25446.10 99.40 0.00 0.00 0.00 0.00 0.00 00:07:53.380 [2024-11-20T06:06:15.658Z] =================================================================================================================== 00:07:53.380 [2024-11-20T06:06:15.658Z] Total : 25446.10 99.40 0.00 0.00 0.00 0.00 0.00 00:07:53.380 00:07:53.380 00:07:53.380 Latency(us) 00:07:53.380 [2024-11-20T06:06:15.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.380 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.380 Nvme0n1 : 10.00 25441.41 99.38 0.00 0.00 5027.58 2484.91 12069.55 00:07:53.380 [2024-11-20T06:06:15.658Z] =================================================================================================================== 00:07:53.380 [2024-11-20T06:06:15.658Z] Total : 25441.41 99.38 0.00 0.00 5027.58 2484.91 12069.55 00:07:53.380 { 00:07:53.380 "results": [ 00:07:53.380 { 00:07:53.380 "job": "Nvme0n1", 00:07:53.380 "core_mask": "0x2", 00:07:53.380 "workload": "randwrite", 00:07:53.380 "status": "finished", 00:07:53.380 "queue_depth": 128, 00:07:53.380 "io_size": 4096, 00:07:53.380 
"runtime": 10.0044, 00:07:53.380 "iops": 25441.40578145616, 00:07:53.380 "mibps": 99.38049133381313, 00:07:53.380 "io_failed": 0, 00:07:53.380 "io_timeout": 0, 00:07:53.380 "avg_latency_us": 5027.575034377628, 00:07:53.380 "min_latency_us": 2484.9066666666668, 00:07:53.380 "max_latency_us": 12069.546666666667 00:07:53.380 } 00:07:53.380 ], 00:07:53.380 "core_count": 1 00:07:53.380 } 00:07:53.380 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3334661 00:07:53.380 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 3334661 ']' 00:07:53.380 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 3334661 00:07:53.380 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:07:53.380 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:53.380 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3334661 00:07:53.380 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:53.380 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:53.380 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3334661' 00:07:53.380 killing process with pid 3334661 00:07:53.380 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 3334661 00:07:53.380 Received shutdown signal, test time was about 10.000000 seconds 00:07:53.380 00:07:53.380 Latency(us) 00:07:53.380 [2024-11-20T06:06:15.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.380 [2024-11-20T06:06:15.658Z] =================================================================================================================== 00:07:53.380 [2024-11-20T06:06:15.658Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:53.380 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 3334661 00:07:53.380 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:53.380 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:53.640 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33526aaa-896a-4501-8fe2-784708753a22 00:07:53.640 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:53.900 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:53.900 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:53.900 07:06:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:53.900 [2024-11-20 07:06:16.120668] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:53.900 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33526aaa-896a-4501-8fe2-784708753a22 00:07:53.900 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:53.900 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33526aaa-896a-4501-8fe2-784708753a22 00:07:53.900 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.900 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.900 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.900 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.900 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.900 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.901 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.901 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:53.901 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33526aaa-896a-4501-8fe2-784708753a22 00:07:54.161 request: 00:07:54.161 { 00:07:54.161 "uuid": "33526aaa-896a-4501-8fe2-784708753a22", 00:07:54.161 "method": "bdev_lvol_get_lvstores", 00:07:54.161 "req_id": 1 00:07:54.161 } 00:07:54.161 Got JSON-RPC error response 00:07:54.161 response: 00:07:54.161 { 00:07:54.161 "code": -19, 00:07:54.161 "message": "No such device" 00:07:54.161 } 00:07:54.161 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:54.161 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:54.161 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:54.161 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:54.161 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:54.421 aio_bdev 00:07:54.421 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b64c6237-0e91-4351-ae5c-f19aac5f2004 00:07:54.421 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=b64c6237-0e91-4351-ae5c-f19aac5f2004 00:07:54.421 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:54.421 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:07:54.421 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:54.421 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:54.422 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:54.422 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b64c6237-0e91-4351-ae5c-f19aac5f2004 -t 2000 00:07:54.682 [ 00:07:54.682 { 00:07:54.682 "name": "b64c6237-0e91-4351-ae5c-f19aac5f2004", 00:07:54.682 "aliases": [ 00:07:54.682 "lvs/lvol" 00:07:54.682 ], 00:07:54.682 "product_name": "Logical Volume", 00:07:54.682 "block_size": 4096, 00:07:54.682 "num_blocks": 38912, 00:07:54.682 "uuid": "b64c6237-0e91-4351-ae5c-f19aac5f2004", 00:07:54.682 "assigned_rate_limits": { 00:07:54.682 "rw_ios_per_sec": 0, 00:07:54.682 "rw_mbytes_per_sec": 0, 00:07:54.682 "r_mbytes_per_sec": 0, 00:07:54.682 "w_mbytes_per_sec": 0 00:07:54.682 }, 00:07:54.682 "claimed": false, 00:07:54.682 "zoned": false, 00:07:54.682 "supported_io_types": { 00:07:54.682 "read": true, 00:07:54.682 "write": true, 00:07:54.682 "unmap": true, 00:07:54.682 "flush": false, 00:07:54.682 "reset": true, 00:07:54.682 "nvme_admin": false, 00:07:54.682 "nvme_io": false, 00:07:54.682 "nvme_io_md": false, 00:07:54.682 "write_zeroes": true, 00:07:54.682 "zcopy": false, 00:07:54.682 "get_zone_info": false, 00:07:54.682 "zone_management": false, 00:07:54.682 "zone_append": false, 00:07:54.682 "compare": false, 00:07:54.682 "compare_and_write": false, 00:07:54.682 "abort": false, 00:07:54.682 "seek_hole": true, 00:07:54.682 "seek_data": true, 00:07:54.682 "copy": false, 00:07:54.682 "nvme_iov_md": false 00:07:54.682 }, 00:07:54.682 "driver_specific": { 00:07:54.682 "lvol": { 00:07:54.682 "lvol_store_uuid": "33526aaa-896a-4501-8fe2-784708753a22", 00:07:54.682 "base_bdev": "aio_bdev", 00:07:54.682 "thin_provision": false, 00:07:54.682 "num_allocated_clusters": 38, 00:07:54.682 "snapshot": false, 00:07:54.682 "clone": false, 00:07:54.682 "esnap_clone": false 00:07:54.682 } 00:07:54.682 } 00:07:54.682 } 00:07:54.682 ] 00:07:54.682 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:07:54.682 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33526aaa-896a-4501-8fe2-784708753a22
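Note: the free-cluster assertion that follows is straightforward bookkeeping: the grown lvstore holds 99 x 4 MiB data clusters, the 150 MiB lvol occupies ceil(150/4) = 38 of them ("num_allocated_clusters": 38 in the dump above), and 99 - 38 = 61 remain free, which is exactly what the jq extraction checks. The same arithmetic as a tiny shell sketch:

total=99; lvol_mb=150; cluster_mb=4
allocated=$(( (lvol_mb + cluster_mb - 1) / cluster_mb ))   # ceil(150/4) = 38
(( total - allocated == 61 )) && echo "free_clusters consistent"

00:07:54.682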
07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:54.943 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:54.943 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33526aaa-896a-4501-8fe2-784708753a22 00:07:54.943 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:54.943 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:54.943 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b64c6237-0e91-4351-ae5c-f19aac5f2004 00:07:55.205 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 33526aaa-896a-4501-8fe2-784708753a22 00:07:55.465 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:55.466 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:55.466 00:07:55.466 real 0m15.950s 00:07:55.466 user 0m15.658s 00:07:55.466 sys 0m1.436s 00:07:55.466 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:55.466 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:55.466 ************************************ 00:07:55.466 END TEST lvs_grow_clean 00:07:55.466 ************************************ 00:07:55.726 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:55.726 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:55.726 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:55.726 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:55.726 ************************************ 00:07:55.726 START TEST lvs_grow_dirty 00:07:55.726 ************************************ 00:07:55.726 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:07:55.726 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:55.726 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:55.726 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:55.726 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:55.726 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:55.726 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:55.726 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:55.726 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:55.726 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:55.987 07:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:55.987 07:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:55.987 07:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b20d1549-359a-48fb-a329-c31c6dab3df3 00:07:55.987 07:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b20d1549-359a-48fb-a329-c31c6dab3df3 00:07:55.987 07:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:56.247 07:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:56.247 07:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:56.248 07:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b20d1549-359a-48fb-a329-c31c6dab3df3 lvol 150 00:07:56.508 07:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=595be856-324d-47d7-8c2f-fe23ee052e4d 00:07:56.508 07:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:56.508 07:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:56.508 [2024-11-20 07:06:18.693754] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:56.508 [2024-11-20 07:06:18.693795] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:56.508 true 00:07:56.508 07:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b20d1549-359a-48fb-a329-c31c6dab3df3 00:07:56.508 07:06:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:56.768 07:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:56.768 07:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:57.028 07:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 595be856-324d-47d7-8c2f-fe23ee052e4d 00:07:57.028 07:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:57.288 [2024-11-20 07:06:19.363776] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.288 07:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:57.548 07:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3338281 00:07:57.548 07:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:57.548 07:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:57.548 07:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3338281 /var/tmp/bdevperf.sock 00:07:57.548 07:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3338281 ']' 00:07:57.548 07:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:57.548 07:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:57.548 07:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:57.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:57.548 07:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:57.548 07:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:57.548 [2024-11-20 07:06:19.611782] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
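While bdevperf initializes, note how the growth was staged: the 200M backing file was truncated to 400M and bdev_aio_rescan picked up the new size (the earlier notice reports the block count going from 51200 to 102400 at 4 KiB blocks), yet the lvstore still shows 49 data clusters because bdev_lvol_grow_lvstore has not run; it is issued later, while I/O is in flight. The full grow sequence as a sketch, with $AIO_FILE and $LVS as placeholder names:

    # Enlarge the file, let the AIO bdev re-read its size, then let the
    # lvstore claim the new 4 MiB clusters.
    truncate -s 400M "$AIO_FILE"
    rpc.py bdev_aio_rescan aio_bdev
    rpc.py bdev_lvol_grow_lvstore -u "$LVS"
    rpc.py bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # 49 -> 99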
00:07:57.548 [2024-11-20 07:06:19.611833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3338281 ] 00:07:57.548 [2024-11-20 07:06:19.695989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.548 [2024-11-20 07:06:19.725555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.488 07:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:58.488 07:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:58.488 07:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:58.488 Nvme0n1 00:07:58.488 07:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:58.748 [ 00:07:58.748 { 00:07:58.748 "name": "Nvme0n1", 00:07:58.748 "aliases": [ 00:07:58.748 "595be856-324d-47d7-8c2f-fe23ee052e4d" 00:07:58.748 ], 00:07:58.748 "product_name": "NVMe disk", 00:07:58.748 "block_size": 4096, 00:07:58.748 "num_blocks": 38912, 00:07:58.748 "uuid": "595be856-324d-47d7-8c2f-fe23ee052e4d", 00:07:58.748 "numa_id": 0, 00:07:58.748 "assigned_rate_limits": { 00:07:58.748 "rw_ios_per_sec": 0, 00:07:58.748 "rw_mbytes_per_sec": 0, 00:07:58.748 "r_mbytes_per_sec": 0, 00:07:58.748 "w_mbytes_per_sec": 0 00:07:58.748 }, 00:07:58.748 "claimed": false, 00:07:58.748 "zoned": false, 00:07:58.748 "supported_io_types": { 00:07:58.748 "read": true, 00:07:58.748 "write": true, 00:07:58.748 "unmap": true, 00:07:58.748 "flush": true, 00:07:58.748 "reset": true, 00:07:58.748 "nvme_admin": true, 00:07:58.748 "nvme_io": true, 00:07:58.748 "nvme_io_md": false, 00:07:58.748 "write_zeroes": true, 00:07:58.748 "zcopy": false, 00:07:58.748 "get_zone_info": false, 00:07:58.748 "zone_management": false, 00:07:58.748 "zone_append": false, 00:07:58.748 "compare": true, 00:07:58.748 "compare_and_write": true, 00:07:58.748 "abort": true, 00:07:58.749 "seek_hole": false, 00:07:58.749 "seek_data": false, 00:07:58.749 "copy": true, 00:07:58.749 "nvme_iov_md": false 00:07:58.749 }, 00:07:58.749 "memory_domains": [ 00:07:58.749 { 00:07:58.749 "dma_device_id": "system", 00:07:58.749 "dma_device_type": 1 00:07:58.749 } 00:07:58.749 ], 00:07:58.749 "driver_specific": { 00:07:58.749 "nvme": [ 00:07:58.749 { 00:07:58.749 "trid": { 00:07:58.749 "trtype": "TCP", 00:07:58.749 "adrfam": "IPv4", 00:07:58.749 "traddr": "10.0.0.2", 00:07:58.749 "trsvcid": "4420", 00:07:58.749 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:58.749 }, 00:07:58.749 "ctrlr_data": { 00:07:58.749 "cntlid": 1, 00:07:58.749 "vendor_id": "0x8086", 00:07:58.749 "model_number": "SPDK bdev Controller", 00:07:58.749 "serial_number": "SPDK0", 00:07:58.749 "firmware_revision": "25.01", 00:07:58.749 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:58.749 "oacs": { 00:07:58.749 "security": 0, 00:07:58.749 "format": 0, 00:07:58.749 "firmware": 0, 00:07:58.749 "ns_manage": 0 00:07:58.749 }, 00:07:58.749 "multi_ctrlr": true, 00:07:58.749 
"ana_reporting": false 00:07:58.749 }, 00:07:58.749 "vs": { 00:07:58.749 "nvme_version": "1.3" 00:07:58.749 }, 00:07:58.749 "ns_data": { 00:07:58.749 "id": 1, 00:07:58.749 "can_share": true 00:07:58.749 } 00:07:58.749 } 00:07:58.749 ], 00:07:58.749 "mp_policy": "active_passive" 00:07:58.749 } 00:07:58.749 } 00:07:58.749 ] 00:07:58.749 07:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3338579 00:07:58.749 07:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:58.749 07:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:58.749 Running I/O for 10 seconds... 00:07:59.738 Latency(us) 00:07:59.738 [2024-11-20T06:06:22.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.738 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.738 Nvme0n1 : 1.00 25172.00 98.33 0.00 0.00 0.00 0.00 0.00 00:07:59.738 [2024-11-20T06:06:22.016Z] =================================================================================================================== 00:07:59.738 [2024-11-20T06:06:22.016Z] Total : 25172.00 98.33 0.00 0.00 0.00 0.00 0.00 00:07:59.738 00:08:00.743 07:06:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b20d1549-359a-48fb-a329-c31c6dab3df3 00:08:00.743 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.743 Nvme0n1 : 2.00 25289.00 98.79 0.00 0.00 0.00 0.00 0.00 00:08:00.743 [2024-11-20T06:06:23.021Z] =================================================================================================================== 00:08:00.743 [2024-11-20T06:06:23.021Z] Total : 25289.00 98.79 0.00 0.00 0.00 0.00 0.00 00:08:00.743 00:08:00.743 true 00:08:00.743 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b20d1549-359a-48fb-a329-c31c6dab3df3 00:08:00.743 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:01.005 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:01.005 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:01.005 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3338579 00:08:01.946 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.946 Nvme0n1 : 3.00 25342.33 98.99 0.00 0.00 0.00 0.00 0.00 00:08:01.946 [2024-11-20T06:06:24.224Z] =================================================================================================================== 00:08:01.946 [2024-11-20T06:06:24.224Z] Total : 25342.33 98.99 0.00 0.00 0.00 0.00 0.00 00:08:01.946 00:08:02.887 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.887 Nvme0n1 : 4.00 25374.25 99.12 0.00 0.00 0.00 0.00 0.00 00:08:02.887 [2024-11-20T06:06:25.165Z] 
=================================================================================================================== 00:08:02.887 [2024-11-20T06:06:25.165Z] Total : 25374.25 99.12 0.00 0.00 0.00 0.00 0.00 00:08:02.887 00:08:03.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.828 Nvme0n1 : 5.00 25405.80 99.24 0.00 0.00 0.00 0.00 0.00 00:08:03.828 [2024-11-20T06:06:26.106Z] =================================================================================================================== 00:08:03.828 [2024-11-20T06:06:26.106Z] Total : 25405.80 99.24 0.00 0.00 0.00 0.00 0.00 00:08:03.828 00:08:04.768 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.768 Nvme0n1 : 6.00 25427.50 99.33 0.00 0.00 0.00 0.00 0.00 00:08:04.768 [2024-11-20T06:06:27.046Z] =================================================================================================================== 00:08:04.768 [2024-11-20T06:06:27.046Z] Total : 25427.50 99.33 0.00 0.00 0.00 0.00 0.00 00:08:04.769 00:08:05.710 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.710 Nvme0n1 : 7.00 25443.00 99.39 0.00 0.00 0.00 0.00 0.00 00:08:05.710 [2024-11-20T06:06:27.988Z] =================================================================================================================== 00:08:05.710 [2024-11-20T06:06:27.988Z] Total : 25443.00 99.39 0.00 0.00 0.00 0.00 0.00 00:08:05.710 00:08:06.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.652 Nvme0n1 : 8.00 25462.50 99.46 0.00 0.00 0.00 0.00 0.00 00:08:06.652 [2024-11-20T06:06:28.930Z] =================================================================================================================== 00:08:06.652 [2024-11-20T06:06:28.930Z] Total : 25462.50 99.46 0.00 0.00 0.00 0.00 0.00 00:08:06.652 00:08:08.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.038 Nvme0n1 : 9.00 25470.78 99.50 0.00 0.00 0.00 0.00 0.00 00:08:08.038 [2024-11-20T06:06:30.316Z] =================================================================================================================== 00:08:08.038 [2024-11-20T06:06:30.316Z] Total : 25470.78 99.50 0.00 0.00 0.00 0.00 0.00 00:08:08.038 00:08:08.979 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.979 Nvme0n1 : 10.00 25477.20 99.52 0.00 0.00 0.00 0.00 0.00 00:08:08.979 [2024-11-20T06:06:31.257Z] =================================================================================================================== 00:08:08.979 [2024-11-20T06:06:31.257Z] Total : 25477.20 99.52 0.00 0.00 0.00 0.00 0.00 00:08:08.979 00:08:08.979 00:08:08.979 Latency(us) 00:08:08.979 [2024-11-20T06:06:31.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.979 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.979 Nvme0n1 : 10.00 25479.82 99.53 0.00 0.00 5020.47 1536.00 8738.13 00:08:08.979 [2024-11-20T06:06:31.257Z] =================================================================================================================== 00:08:08.979 [2024-11-20T06:06:31.257Z] Total : 25479.82 99.53 0.00 0.00 5020.47 1536.00 8738.13 00:08:08.979 { 00:08:08.979 "results": [ 00:08:08.979 { 00:08:08.979 "job": "Nvme0n1", 00:08:08.979 "core_mask": "0x2", 00:08:08.979 "workload": "randwrite", 00:08:08.979 "status": "finished", 00:08:08.979 "queue_depth": 128, 00:08:08.979 "io_size": 4096, 00:08:08.979 
"runtime": 10.003994, 00:08:08.979 "iops": 25479.823358550595, 00:08:08.979 "mibps": 99.53055999433826, 00:08:08.979 "io_failed": 0, 00:08:08.979 "io_timeout": 0, 00:08:08.979 "avg_latency_us": 5020.471471688244, 00:08:08.979 "min_latency_us": 1536.0, 00:08:08.979 "max_latency_us": 8738.133333333333 00:08:08.979 } 00:08:08.979 ], 00:08:08.979 "core_count": 1 00:08:08.979 } 00:08:08.979 07:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3338281 00:08:08.979 07:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 3338281 ']' 00:08:08.979 07:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 3338281 00:08:08.979 07:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:08:08.979 07:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:08.979 07:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3338281 00:08:08.979 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:08.979 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:08.979 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3338281' 00:08:08.979 killing process with pid 3338281 00:08:08.979 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 3338281 00:08:08.979 Received shutdown signal, test time was about 10.000000 seconds 00:08:08.979 00:08:08.979 Latency(us) 00:08:08.979 [2024-11-20T06:06:31.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.979 [2024-11-20T06:06:31.257Z] =================================================================================================================== 00:08:08.979 [2024-11-20T06:06:31.257Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:08.979 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 3338281 00:08:08.980 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:09.240 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:09.240 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b20d1549-359a-48fb-a329-c31c6dab3df3 00:08:09.240 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:09.501 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:09.501 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:09.501 07:06:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3333936 00:08:09.501 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3333936 00:08:09.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3333936 Killed "${NVMF_APP[@]}" "$@" 00:08:09.501 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:09.501 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:09.501 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:09.501 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:09.501 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:09.501 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3340654 00:08:09.501 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3340654 00:08:09.501 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:09.501 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3340654 ']' 00:08:09.501 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.501 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:09.501 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.501 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:09.501 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:09.501 [2024-11-20 07:06:31.706894] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:08:09.501 [2024-11-20 07:06:31.706977] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.761 [2024-11-20 07:06:31.797273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.761 [2024-11-20 07:06:31.827730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.761 [2024-11-20 07:06:31.827757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.761 [2024-11-20 07:06:31.827763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.761 [2024-11-20 07:06:31.827767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
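The dirty half of the scenario starts here: the original target (pid 3333936) is killed with SIGKILL, so the lvstore is never cleanly unloaded, and a fresh nvmf_tgt (pid 3340654) comes up in the cvl_0_0_ns_spdk namespace. Recovery is exercised when the AIO bdev is re-created below and the blobstore replays its metadata. As a sketch, with $AIO_FILE a placeholder and paths abbreviated:

    # SIGKILL leaves the lvstore dirty; re-creating the AIO bdev on the
    # new target triggers blobstore replay (see the "Performing
    # recovery on blobstore" notices that follow).
    kill -9 "$nvmfpid"
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096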
00:08:09.761 [2024-11-20 07:06:31.827771] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.761 [2024-11-20 07:06:31.828227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.333 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:10.333 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:08:10.333 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:10.333 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:10.333 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:10.333 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.333 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:10.595 [2024-11-20 07:06:32.674056] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:10.595 [2024-11-20 07:06:32.674135] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:10.595 [2024-11-20 07:06:32.674164] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:10.595 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:10.595 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 595be856-324d-47d7-8c2f-fe23ee052e4d 00:08:10.595 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=595be856-324d-47d7-8c2f-fe23ee052e4d 00:08:10.595 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:10.595 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:10.595 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:10.595 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:10.595 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:10.595 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 595be856-324d-47d7-8c2f-fe23ee052e4d -t 2000 00:08:10.855 [ 00:08:10.855 { 00:08:10.855 "name": "595be856-324d-47d7-8c2f-fe23ee052e4d", 00:08:10.855 "aliases": [ 00:08:10.855 "lvs/lvol" 00:08:10.855 ], 00:08:10.855 "product_name": "Logical Volume", 00:08:10.855 "block_size": 4096, 00:08:10.855 "num_blocks": 38912, 00:08:10.855 "uuid": "595be856-324d-47d7-8c2f-fe23ee052e4d", 00:08:10.855 "assigned_rate_limits": { 00:08:10.855 "rw_ios_per_sec": 0, 00:08:10.856 "rw_mbytes_per_sec": 0, 
00:08:10.856 "r_mbytes_per_sec": 0, 00:08:10.856 "w_mbytes_per_sec": 0 00:08:10.856 }, 00:08:10.856 "claimed": false, 00:08:10.856 "zoned": false, 00:08:10.856 "supported_io_types": { 00:08:10.856 "read": true, 00:08:10.856 "write": true, 00:08:10.856 "unmap": true, 00:08:10.856 "flush": false, 00:08:10.856 "reset": true, 00:08:10.856 "nvme_admin": false, 00:08:10.856 "nvme_io": false, 00:08:10.856 "nvme_io_md": false, 00:08:10.856 "write_zeroes": true, 00:08:10.856 "zcopy": false, 00:08:10.856 "get_zone_info": false, 00:08:10.856 "zone_management": false, 00:08:10.856 "zone_append": false, 00:08:10.856 "compare": false, 00:08:10.856 "compare_and_write": false, 00:08:10.856 "abort": false, 00:08:10.856 "seek_hole": true, 00:08:10.856 "seek_data": true, 00:08:10.856 "copy": false, 00:08:10.856 "nvme_iov_md": false 00:08:10.856 }, 00:08:10.856 "driver_specific": { 00:08:10.856 "lvol": { 00:08:10.856 "lvol_store_uuid": "b20d1549-359a-48fb-a329-c31c6dab3df3", 00:08:10.856 "base_bdev": "aio_bdev", 00:08:10.856 "thin_provision": false, 00:08:10.856 "num_allocated_clusters": 38, 00:08:10.856 "snapshot": false, 00:08:10.856 "clone": false, 00:08:10.856 "esnap_clone": false 00:08:10.856 } 00:08:10.856 } 00:08:10.856 } 00:08:10.856 ] 00:08:10.856 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:10.856 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b20d1549-359a-48fb-a329-c31c6dab3df3 00:08:10.856 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:11.116 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:11.116 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b20d1549-359a-48fb-a329-c31c6dab3df3 00:08:11.116 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:11.116 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:11.116 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:11.376 [2024-11-20 07:06:33.486594] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:11.376 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b20d1549-359a-48fb-a329-c31c6dab3df3 00:08:11.376 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:11.376 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b20d1549-359a-48fb-a329-c31c6dab3df3 00:08:11.376 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.376 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.376 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.376 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.376 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.376 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.376 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.376 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:11.376 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b20d1549-359a-48fb-a329-c31c6dab3df3 00:08:11.638 request: 00:08:11.638 { 00:08:11.638 "uuid": "b20d1549-359a-48fb-a329-c31c6dab3df3", 00:08:11.638 "method": "bdev_lvol_get_lvstores", 00:08:11.638 "req_id": 1 00:08:11.638 } 00:08:11.638 Got JSON-RPC error response 00:08:11.638 response: 00:08:11.638 { 00:08:11.638 "code": -19, 00:08:11.638 "message": "No such device" 00:08:11.638 } 00:08:11.638 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:11.638 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:11.638 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:11.638 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:11.638 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:11.638 aio_bdev 00:08:11.638 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 595be856-324d-47d7-8c2f-fe23ee052e4d 00:08:11.638 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=595be856-324d-47d7-8c2f-fe23ee052e4d 00:08:11.638 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:11.638 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:11.638 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:11.638 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:11.638 07:06:33 
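The JSON-RPC error earlier in this block is the expected outcome: bdev_aio_delete hot-removed the lvstore's base bdev, so bdev_lvol_get_lvstores fails with code -19 (No such device), and the NOT wrapper turns that failure into a pass before the AIO bdev is re-created. The same negative check without the harness, assuming $LVS holds the stale UUID:

    # After the base bdev is gone the lvstore must be unresolvable;
    # treat RPC success as the test failure.
    if rpc.py bdev_lvol_get_lvstores -u "$LVS" 2>/dev/null; then
        echo "stale lvstore still visible" >&2
        exit 1
    fi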
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:11.898 07:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 595be856-324d-47d7-8c2f-fe23ee052e4d -t 2000 00:08:12.159 [ 00:08:12.159 { 00:08:12.159 "name": "595be856-324d-47d7-8c2f-fe23ee052e4d", 00:08:12.159 "aliases": [ 00:08:12.159 "lvs/lvol" 00:08:12.159 ], 00:08:12.159 "product_name": "Logical Volume", 00:08:12.159 "block_size": 4096, 00:08:12.159 "num_blocks": 38912, 00:08:12.159 "uuid": "595be856-324d-47d7-8c2f-fe23ee052e4d", 00:08:12.159 "assigned_rate_limits": { 00:08:12.159 "rw_ios_per_sec": 0, 00:08:12.159 "rw_mbytes_per_sec": 0, 00:08:12.159 "r_mbytes_per_sec": 0, 00:08:12.159 "w_mbytes_per_sec": 0 00:08:12.159 }, 00:08:12.159 "claimed": false, 00:08:12.159 "zoned": false, 00:08:12.159 "supported_io_types": { 00:08:12.159 "read": true, 00:08:12.159 "write": true, 00:08:12.159 "unmap": true, 00:08:12.159 "flush": false, 00:08:12.159 "reset": true, 00:08:12.159 "nvme_admin": false, 00:08:12.159 "nvme_io": false, 00:08:12.159 "nvme_io_md": false, 00:08:12.159 "write_zeroes": true, 00:08:12.159 "zcopy": false, 00:08:12.159 "get_zone_info": false, 00:08:12.159 "zone_management": false, 00:08:12.159 "zone_append": false, 00:08:12.159 "compare": false, 00:08:12.159 "compare_and_write": false, 00:08:12.159 "abort": false, 00:08:12.159 "seek_hole": true, 00:08:12.159 "seek_data": true, 00:08:12.159 "copy": false, 00:08:12.159 "nvme_iov_md": false 00:08:12.159 }, 00:08:12.159 "driver_specific": { 00:08:12.159 "lvol": { 00:08:12.159 "lvol_store_uuid": "b20d1549-359a-48fb-a329-c31c6dab3df3", 00:08:12.159 "base_bdev": "aio_bdev", 00:08:12.159 "thin_provision": false, 00:08:12.159 "num_allocated_clusters": 38, 00:08:12.159 "snapshot": false, 00:08:12.159 "clone": false, 00:08:12.159 "esnap_clone": false 00:08:12.159 } 00:08:12.159 } 00:08:12.159 } 00:08:12.159 ] 00:08:12.159 07:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:12.159 07:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b20d1549-359a-48fb-a329-c31c6dab3df3 00:08:12.159 07:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:12.159 07:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:12.160 07:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b20d1549-359a-48fb-a329-c31c6dab3df3 00:08:12.160 07:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:12.420 07:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:12.420 07:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 595be856-324d-47d7-8c2f-fe23ee052e4d 00:08:12.420 07:06:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b20d1549-359a-48fb-a329-c31c6dab3df3 00:08:12.680 07:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:12.940 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:12.940 00:08:12.940 real 0m17.240s 00:08:12.940 user 0m45.617s 00:08:12.940 sys 0m2.976s 00:08:12.940 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:12.940 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:12.940 ************************************ 00:08:12.940 END TEST lvs_grow_dirty 00:08:12.940 ************************************ 00:08:12.940 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:12.940 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:08:12.940 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:08:12.940 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:08:12.940 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:12.940 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:08:12.940 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:08:12.940 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:08:12.940 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:12.940 nvmf_trace.0 00:08:12.940 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:08:12.940 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:12.940 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:12.940 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:12.940 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:12.940 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:12.940 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:12.940 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:12.940 rmmod nvme_tcp 00:08:12.940 rmmod nvme_fabrics 00:08:12.940 rmmod nvme_keyring 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:13.201 
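Teardown archives the trace ring before unloading: the target ran with -e 0xFFFF, so every tracepoint group landed in /dev/shm/nvmf_trace.0, and process_shm tars it into the job's output directory for offline analysis (the app_setup_trace notice earlier suggests 'spdk_trace -s nvmf -i 0' for a live snapshot). The capture step amounts to the following, with $OUT a placeholder for the job output directory:

    # Archive the shared-memory trace file before the nvme modules are
    # removed; matches the tar invocation in the trace above.
    tar -C /dev/shm -cvzf "$OUT/nvmf_trace.0_shm.tar.gz" nvmf_trace.0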
07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3340654 ']' 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3340654 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 3340654 ']' 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 3340654 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3340654 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3340654' 00:08:13.201 killing process with pid 3340654 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 3340654 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 3340654 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.201 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:15.750 00:08:15.750 real 0m44.571s 00:08:15.750 user 1m7.628s 00:08:15.750 sys 0m10.484s 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.750 ************************************ 00:08:15.750 END TEST nvmf_lvs_grow 00:08:15.750 ************************************ 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:15.750 ************************************ 00:08:15.750 START TEST nvmf_bdev_io_wait 00:08:15.750 ************************************ 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:15.750 * Looking for test storage... 00:08:15.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:15.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.750 --rc genhtml_branch_coverage=1 00:08:15.750 --rc genhtml_function_coverage=1 00:08:15.750 --rc genhtml_legend=1 00:08:15.750 --rc geninfo_all_blocks=1 00:08:15.750 --rc geninfo_unexecuted_blocks=1 00:08:15.750 00:08:15.750 ' 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:15.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.750 --rc genhtml_branch_coverage=1 00:08:15.750 --rc genhtml_function_coverage=1 00:08:15.750 --rc genhtml_legend=1 00:08:15.750 --rc geninfo_all_blocks=1 00:08:15.750 --rc geninfo_unexecuted_blocks=1 00:08:15.750 00:08:15.750 ' 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:15.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.750 --rc genhtml_branch_coverage=1 00:08:15.750 --rc genhtml_function_coverage=1 00:08:15.750 --rc genhtml_legend=1 00:08:15.750 --rc geninfo_all_blocks=1 00:08:15.750 --rc geninfo_unexecuted_blocks=1 00:08:15.750 00:08:15.750 ' 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:15.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.750 --rc genhtml_branch_coverage=1 00:08:15.750 --rc genhtml_function_coverage=1 00:08:15.750 --rc genhtml_legend=1 00:08:15.750 --rc geninfo_all_blocks=1 00:08:15.750 --rc geninfo_unexecuted_blocks=1 00:08:15.750 00:08:15.750 ' 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.750 07:06:37 
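The block above is scripts/common.sh deciding which lcov option set to export: it parses 'lcov --version' (1.15 here), and 'lt 1.15 2' splits both version strings into fields and compares them componentwise, so the pre-2.0 flag set wins. A compact sketch of that comparison (dots only, not the full cmp_versions helper, which also splits on '-' and ':'):

    # Return success when $1 sorts before $2, comparing dot-separated
    # numeric fields and treating missing fields as 0.
    version_lt() {
        local -a a b; local i
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }
    version_lt 1.15 2 && echo "older lcov flag set"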
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.750 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:15.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
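nvmftestinit now turns to hardware discovery: with NET_TYPE=phy the helper enumerates physical NICs by PCI vendor:device pair (intel 0x8086, mellanox 0x15b3) and buckets them into e810/x722/mlx arrays, as the trace continues below. Outside the harness, the same filter can be applied directly with lspci; the device IDs here are the two E810 entries the script registers next, and this is an ad hoc equivalent rather than what the helper itself runs:

    # List Intel E810 ports by vendor:device pair (IDs taken from the
    # trace that follows); -D shows the PCI domain, -n keeps IDs numeric.
    lspci -D -n -d 8086:1592
    lspci -D -n -d 8086:159b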
MALLOC_BLOCK_SIZE=512 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:15.751 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:23.903 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:23.903 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.903 07:06:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:23.903 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:23.903 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.903 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.903 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:23.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:08:23.904 00:08:23.904 --- 10.0.0.2 ping statistics --- 00:08:23.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.904 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:23.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:08:23.904 00:08:23.904 --- 10.0.0.1 ping statistics --- 00:08:23.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.904 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3345731 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3345731 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 3345731 ']' 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:23.904 07:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.904 [2024-11-20 07:06:45.354139] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:08:23.904 [2024-11-20 07:06:45.354215] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.904 [2024-11-20 07:06:45.452882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.904 [2024-11-20 07:06:45.507223] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.904 [2024-11-20 07:06:45.507274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.904 [2024-11-20 07:06:45.507283] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.904 [2024-11-20 07:06:45.507290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.904 [2024-11-20 07:06:45.507297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.904 [2024-11-20 07:06:45.509648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.904 [2024-11-20 07:06:45.509809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.904 [2024-11-20 07:06:45.509971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.904 [2024-11-20 07:06:45.509971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:24.166 [2024-11-20 07:06:46.305306] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.166 Malloc0 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.166 [2024-11-20 07:06:46.370869] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3345934 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3345937 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:24.166 { 00:08:24.166 "params": { 
00:08:24.166 "name": "Nvme$subsystem", 00:08:24.166 "trtype": "$TEST_TRANSPORT", 00:08:24.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.166 "adrfam": "ipv4", 00:08:24.166 "trsvcid": "$NVMF_PORT", 00:08:24.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.166 "hdgst": ${hdgst:-false}, 00:08:24.166 "ddgst": ${ddgst:-false} 00:08:24.166 }, 00:08:24.166 "method": "bdev_nvme_attach_controller" 00:08:24.166 } 00:08:24.166 EOF 00:08:24.166 )") 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3345940 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:24.166 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:24.166 { 00:08:24.166 "params": { 00:08:24.166 "name": "Nvme$subsystem", 00:08:24.166 "trtype": "$TEST_TRANSPORT", 00:08:24.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.166 "adrfam": "ipv4", 00:08:24.167 "trsvcid": "$NVMF_PORT", 00:08:24.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.167 "hdgst": ${hdgst:-false}, 00:08:24.167 "ddgst": ${ddgst:-false} 00:08:24.167 }, 00:08:24.167 "method": "bdev_nvme_attach_controller" 00:08:24.167 } 00:08:24.167 EOF 00:08:24.167 )") 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3345944 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:24.167 { 00:08:24.167 "params": { 00:08:24.167 "name": "Nvme$subsystem", 00:08:24.167 "trtype": "$TEST_TRANSPORT", 00:08:24.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.167 "adrfam": "ipv4", 00:08:24.167 "trsvcid": "$NVMF_PORT", 00:08:24.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.167 "hdgst": ${hdgst:-false}, 
00:08:24.167 "ddgst": ${ddgst:-false} 00:08:24.167 }, 00:08:24.167 "method": "bdev_nvme_attach_controller" 00:08:24.167 } 00:08:24.167 EOF 00:08:24.167 )") 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:24.167 { 00:08:24.167 "params": { 00:08:24.167 "name": "Nvme$subsystem", 00:08:24.167 "trtype": "$TEST_TRANSPORT", 00:08:24.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.167 "adrfam": "ipv4", 00:08:24.167 "trsvcid": "$NVMF_PORT", 00:08:24.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.167 "hdgst": ${hdgst:-false}, 00:08:24.167 "ddgst": ${ddgst:-false} 00:08:24.167 }, 00:08:24.167 "method": "bdev_nvme_attach_controller" 00:08:24.167 } 00:08:24.167 EOF 00:08:24.167 )") 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3345934 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:24.167 "params": { 00:08:24.167 "name": "Nvme1", 00:08:24.167 "trtype": "tcp", 00:08:24.167 "traddr": "10.0.0.2", 00:08:24.167 "adrfam": "ipv4", 00:08:24.167 "trsvcid": "4420", 00:08:24.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:24.167 "hdgst": false, 00:08:24.167 "ddgst": false 00:08:24.167 }, 00:08:24.167 "method": "bdev_nvme_attach_controller" 00:08:24.167 }' 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:24.167 "params": { 00:08:24.167 "name": "Nvme1", 00:08:24.167 "trtype": "tcp", 00:08:24.167 "traddr": "10.0.0.2", 00:08:24.167 "adrfam": "ipv4", 00:08:24.167 "trsvcid": "4420", 00:08:24.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:24.167 "hdgst": false, 00:08:24.167 "ddgst": false 00:08:24.167 }, 00:08:24.167 "method": "bdev_nvme_attach_controller" 00:08:24.167 }' 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:24.167 "params": { 00:08:24.167 "name": "Nvme1", 00:08:24.167 "trtype": "tcp", 00:08:24.167 "traddr": "10.0.0.2", 00:08:24.167 "adrfam": "ipv4", 00:08:24.167 "trsvcid": "4420", 00:08:24.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:24.167 "hdgst": false, 00:08:24.167 "ddgst": false 00:08:24.167 }, 00:08:24.167 "method": "bdev_nvme_attach_controller" 00:08:24.167 }' 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:24.167 07:06:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:24.167 "params": { 00:08:24.167 "name": "Nvme1", 00:08:24.167 "trtype": "tcp", 00:08:24.167 "traddr": "10.0.0.2", 00:08:24.167 "adrfam": "ipv4", 00:08:24.167 "trsvcid": "4420", 00:08:24.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:24.167 "hdgst": false, 00:08:24.167 "ddgst": false 00:08:24.167 }, 00:08:24.167 "method": "bdev_nvme_attach_controller" 00:08:24.167 }' 00:08:24.167 [2024-11-20 07:06:46.431353] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:08:24.167 [2024-11-20 07:06:46.431434] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:24.167 [2024-11-20 07:06:46.432488] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:08:24.167 [2024-11-20 07:06:46.432562] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:24.167 [2024-11-20 07:06:46.434799] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:08:24.167 [2024-11-20 07:06:46.434862] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:24.167 [2024-11-20 07:06:46.438660] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:08:24.167 [2024-11-20 07:06:46.438741] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:08:24.428 [2024-11-20 07:06:46.664142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:24.428 [2024-11-20 07:06:46.702799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:08:24.689 [2024-11-20 07:06:46.728297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:24.689 [2024-11-20 07:06:46.767361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:08:24.689 [2024-11-20 07:06:46.795220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:24.689 [2024-11-20 07:06:46.834734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:08:24.689 [2024-11-20 07:06:46.887346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:24.689 [2024-11-20 07:06:46.927192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:08:24.949 Running I/O for 1 seconds...
00:08:24.949 Running I/O for 1 seconds...
00:08:24.949 Running I/O for 1 seconds...
00:08:24.949 Running I/O for 1 seconds...
00:08:25.891 11284.00 IOPS, 44.08 MiB/s
00:08:25.891 Latency(us)
00:08:25.891 [2024-11-20T06:06:48.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:25.891 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:08:25.892 Nvme1n1 : 1.01 11346.10 44.32 0.00 0.00 11246.39 4642.13 17913.17
00:08:25.892 [2024-11-20T06:06:48.170Z] ===================================================================================================================
00:08:25.892 [2024-11-20T06:06:48.170Z] Total : 11346.10 44.32 0.00 0.00 11246.39 4642.13 17913.17
00:08:25.892 189136.00 IOPS, 738.81 MiB/s
00:08:25.892 Latency(us)
00:08:25.892 [2024-11-20T06:06:48.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:25.892 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:08:25.892 Nvme1n1 : 1.00 188755.73 737.33 0.00 0.00 674.48 300.37 1993.39
00:08:25.892 [2024-11-20T06:06:48.170Z] ===================================================================================================================
00:08:25.892 [2024-11-20T06:06:48.170Z] Total : 188755.73 737.33 0.00 0.00 674.48 300.37 1993.39
00:08:25.892 8780.00 IOPS, 34.30 MiB/s
00:08:25.892 Latency(us)
00:08:25.892 [2024-11-20T06:06:48.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:25.892 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:08:25.892 Nvme1n1 : 1.01 8850.37 34.57 0.00 0.00 14405.69 6335.15 24576.00
00:08:25.892 [2024-11-20T06:06:48.170Z] ===================================================================================================================
00:08:25.892 [2024-11-20T06:06:48.170Z] Total : 8850.37 34.57 0.00 0.00 14405.69 6335.15 24576.00
00:08:25.892 10127.00 IOPS, 39.56 MiB/s
00:08:25.892 Latency(us)
00:08:25.892 [2024-11-20T06:06:48.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:25.892 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:08:25.892 Nvme1n1 : 1.01 10214.65 39.90 0.00 0.00 12489.91 4751.36 24576.00
00:08:25.892 [2024-11-20T06:06:48.170Z]
=================================================================================================================== 00:08:25.892 [2024-11-20T06:06:48.170Z] Total : 10214.65 39.90 0.00 0.00 12489.91 4751.36 24576.00 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3345937 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3345940 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3345944 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:26.153 rmmod nvme_tcp 00:08:26.153 rmmod nvme_fabrics 00:08:26.153 rmmod nvme_keyring 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3345731 ']' 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3345731 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 3345731 ']' 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 3345731 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3345731 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 3345731' 00:08:26.153 killing process with pid 3345731 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 3345731 00:08:26.153 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 3345731 00:08:26.413 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:26.413 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:26.413 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:26.413 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:26.413 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:26.413 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:26.413 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:26.413 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:26.413 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:26.413 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.413 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.413 07:06:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.329 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:28.329 00:08:28.329 real 0m13.005s 00:08:28.329 user 0m19.370s 00:08:28.329 sys 0m7.475s 00:08:28.329 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:28.329 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.329 ************************************ 00:08:28.329 END TEST nvmf_bdev_io_wait 00:08:28.329 ************************************ 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:28.591 ************************************ 00:08:28.591 START TEST nvmf_queue_depth 00:08:28.591 ************************************ 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:28.591 * Looking for test storage... 
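The bdev_io_wait teardown traced just above reaps the target with killprocess before the next test starts. A condensed sketch of that liveness-check-then-kill pattern, reconstructed from the expanded trace; the sudo special-case of the real common/autotest_common.sh helper is reduced to a bail-out here, and wait only succeeds because the caller is the shell that launched the target.

# Sketch only, simplified from the expansion traced above.
killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # trace: '[' -z 3345731 ']'
    kill -0 "$pid" 2> /dev/null || return 0   # signal 0 probes liveness only
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in the trace
        # refuse to signal a sudo wrapper; the real helper special-cases this
        [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"   # reap the child so its shm files and listen port are released
}
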
00:08:28.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:28.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.591 --rc genhtml_branch_coverage=1 00:08:28.591 --rc genhtml_function_coverage=1 00:08:28.591 --rc genhtml_legend=1 00:08:28.591 --rc geninfo_all_blocks=1 00:08:28.591 --rc geninfo_unexecuted_blocks=1 00:08:28.591 00:08:28.591 ' 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:28.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.591 --rc genhtml_branch_coverage=1 00:08:28.591 --rc genhtml_function_coverage=1 00:08:28.591 --rc genhtml_legend=1 00:08:28.591 --rc geninfo_all_blocks=1 00:08:28.591 --rc geninfo_unexecuted_blocks=1 00:08:28.591 00:08:28.591 ' 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:28.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.591 --rc genhtml_branch_coverage=1 00:08:28.591 --rc genhtml_function_coverage=1 00:08:28.591 --rc genhtml_legend=1 00:08:28.591 --rc geninfo_all_blocks=1 00:08:28.591 --rc geninfo_unexecuted_blocks=1 00:08:28.591 00:08:28.591 ' 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:28.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.591 --rc genhtml_branch_coverage=1 00:08:28.591 --rc genhtml_function_coverage=1 00:08:28.591 --rc genhtml_legend=1 00:08:28.591 --rc geninfo_all_blocks=1 00:08:28.591 --rc geninfo_unexecuted_blocks=1 00:08:28.591 00:08:28.591 ' 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.591 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:28.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:28.854 07:06:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:36.998 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:36.998 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:36.998 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:36.998 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:36.998 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:36.998 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:36.998 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:36.998 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:36.999 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:36.999 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:36.999 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:36.999 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:36.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:36.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:08:36.999 00:08:36.999 --- 10.0.0.2 ping statistics --- 00:08:36.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.999 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:36.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:36.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:08:36.999 00:08:36.999 --- 10.0.0.1 ping statistics --- 00:08:36.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.999 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3350507 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3350507 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3350507 ']' 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:36.999 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:36.999 [2024-11-20 07:06:58.465246] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
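
The nvmfappstart step above reduces to launching nvmf_tgt inside the network namespace built a few lines earlier and polling its RPC socket until it answers. A minimal standalone sketch of that sequence, assuming root, the cvl_0_0_ns_spdk namespace from this log, and an SPDK checkout at $SPDK_DIR (the variable name and the ~10 s poll budget are illustrative, not the exact autotest helpers):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout location
    # Start the target in the namespace, mirroring the traced command above.
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # waitforlisten-style loop: retry a cheap RPC until the app is listening.
    for _ in $(seq 1 100); do
        "$SPDK_DIR/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
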
00:08:36.999 [2024-11-20 07:06:58.465318] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.999 [2024-11-20 07:06:58.566035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.999 [2024-11-20 07:06:58.616353] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.999 [2024-11-20 07:06:58.616405] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.999 [2024-11-20 07:06:58.616413] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.999 [2024-11-20 07:06:58.616420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.999 [2024-11-20 07:06:58.616426] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.999 [2024-11-20 07:06:58.617195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.999 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:36.999 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:36.999 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:36.999 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:36.999 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:37.259 [2024-11-20 07:06:59.319000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:37.259 Malloc0 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.259 07:06:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:37.259 [2024-11-20 07:06:59.380131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3350812 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3350812 /var/tmp/bdevperf.sock 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3350812 ']' 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:37.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:37.259 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:37.259 [2024-11-20 07:06:59.438304] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
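
The rpc_cmd calls traced above, from nvmf_create_transport through nvmf_subsystem_add_listener, map one-to-one onto plain rpc.py invocations against the target's default socket. Gathered into a sketch, reusing the $SPDK_DIR assumption from the previous snippet and the namespace wrapper this log uses throughout (the rpc helper name is invented for brevity; all flags are copied verbatim from the trace):

    rpc() { ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/scripts/rpc.py" "$@"; }  # run as root
    rpc nvmf_create_transport -t tcp -o -u 8192                    # -u caps in-capsule data at 8192 bytes
    rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM-backed bdev, 512 B blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose the bdev as a namespace
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
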
00:08:37.259 [2024-11-20 07:06:59.438373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3350812 ] 00:08:37.259 [2024-11-20 07:06:59.528595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.519 [2024-11-20 07:06:59.581543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.091 07:07:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:38.091 07:07:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:38.091 07:07:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:38.091 07:07:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.091 07:07:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.352 NVMe0n1 00:08:38.352 07:07:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.352 07:07:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:38.352 Running I/O for 10 seconds... 00:08:40.236 8509.00 IOPS, 33.24 MiB/s [2024-11-20T06:07:03.896Z] 9827.50 IOPS, 38.39 MiB/s [2024-11-20T06:07:04.837Z] 10530.00 IOPS, 41.13 MiB/s [2024-11-20T06:07:05.777Z] 11006.00 IOPS, 42.99 MiB/s [2024-11-20T06:07:06.717Z] 11467.00 IOPS, 44.79 MiB/s [2024-11-20T06:07:07.657Z] 11773.33 IOPS, 45.99 MiB/s [2024-11-20T06:07:08.601Z] 12019.43 IOPS, 46.95 MiB/s [2024-11-20T06:07:09.542Z] 12175.00 IOPS, 47.56 MiB/s [2024-11-20T06:07:10.926Z] 12305.78 IOPS, 48.07 MiB/s [2024-11-20T06:07:10.926Z] 12409.90 IOPS, 48.48 MiB/s 00:08:48.648 Latency(us) 00:08:48.648 [2024-11-20T06:07:10.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:48.648 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:48.648 Verification LBA range: start 0x0 length 0x4000 00:08:48.648 NVMe0n1 : 10.04 12454.61 48.65 0.00 0.00 81931.64 5324.80 71652.69 00:08:48.648 [2024-11-20T06:07:10.926Z] =================================================================================================================== 00:08:48.648 [2024-11-20T06:07:10.926Z] Total : 12454.61 48.65 0.00 0.00 81931.64 5324.80 71652.69 00:08:48.648 { 00:08:48.648 "results": [ 00:08:48.648 { 00:08:48.648 "job": "NVMe0n1", 00:08:48.648 "core_mask": "0x1", 00:08:48.648 "workload": "verify", 00:08:48.648 "status": "finished", 00:08:48.648 "verify_range": { 00:08:48.648 "start": 0, 00:08:48.648 "length": 16384 00:08:48.648 }, 00:08:48.648 "queue_depth": 1024, 00:08:48.648 "io_size": 4096, 00:08:48.648 "runtime": 10.039816, 00:08:48.648 "iops": 12454.610721949486, 00:08:48.648 "mibps": 48.65082313261518, 00:08:48.648 "io_failed": 0, 00:08:48.648 "io_timeout": 0, 00:08:48.648 "avg_latency_us": 81931.63976093366, 00:08:48.648 "min_latency_us": 5324.8, 00:08:48.648 "max_latency_us": 71652.69333333333 00:08:48.648 } 00:08:48.648 ], 00:08:48.648 "core_count": 1 00:08:48.648 } 00:08:48.648 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- target/queue_depth.sh@39 -- # killprocess 3350812 00:08:48.648 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3350812 ']' 00:08:48.648 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3350812 00:08:48.648 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:48.648 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:48.648 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3350812 00:08:48.648 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:48.648 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:48.648 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3350812' 00:08:48.648 killing process with pid 3350812 00:08:48.648 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3350812 00:08:48.648 Received shutdown signal, test time was about 10.000000 seconds 00:08:48.648 00:08:48.648 Latency(us) 00:08:48.648 [2024-11-20T06:07:10.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:48.648 [2024-11-20T06:07:10.926Z] =================================================================================================================== 00:08:48.648 [2024-11-20T06:07:10.926Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:48.648 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3350812 00:08:48.648 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:48.648 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:48.648 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:48.648 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:48.648 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:48.649 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:48.649 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:48.649 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:48.649 rmmod nvme_tcp 00:08:48.649 rmmod nvme_fabrics 00:08:48.649 rmmod nvme_keyring 00:08:48.649 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:48.649 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:48.649 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:48.649 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3350507 ']' 00:08:48.649 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3350507 00:08:48.649 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3350507 ']' 00:08:48.649 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3350507 
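
On the initiator side, the run whose results appear above pairs bdevperf in wait-for-RPC mode (-z) with two RPCs: one attaching the remote subsystem over TCP, one starting the queued workload. A sketch under the same $SPDK_DIR assumption, with queue depth 1024 and 4 KiB verify I/O exactly as traced:

    "$SPDK_DIR/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    # (poll /var/tmp/bdevperf.sock as in the waitforlisten sketch before issuing RPCs)
    # Attach the target's subsystem; bdevperf then sees it as bdev NVMe0n1.
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Run the 10-second test and print the IOPS/latency table seen above.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests
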
00:08:48.649 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:48.649 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:48.649 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3350507 00:08:48.649 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:48.649 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:48.649 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3350507' 00:08:48.649 killing process with pid 3350507 00:08:48.649 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3350507 00:08:48.649 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3350507 00:08:48.909 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:48.909 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:48.909 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:48.909 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:48.909 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:48.909 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:48.909 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:48.909 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:48.909 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:48.909 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.909 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.909 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.818 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:50.818 00:08:50.818 real 0m22.407s 00:08:50.818 user 0m25.623s 00:08:50.818 sys 0m7.051s 00:08:50.818 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:50.818 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:50.818 ************************************ 00:08:50.818 END TEST nvmf_queue_depth 00:08:50.818 ************************************ 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:51.078 
************************************ 00:08:51.078 START TEST nvmf_target_multipath 00:08:51.078 ************************************ 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:51.078 * Looking for test storage... 00:08:51.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:51.078 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.079 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:51.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.079 --rc genhtml_branch_coverage=1 00:08:51.079 --rc genhtml_function_coverage=1 00:08:51.079 --rc genhtml_legend=1 00:08:51.079 --rc geninfo_all_blocks=1 00:08:51.079 --rc geninfo_unexecuted_blocks=1 00:08:51.079 00:08:51.079 ' 00:08:51.079 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:51.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.079 --rc genhtml_branch_coverage=1 00:08:51.079 --rc genhtml_function_coverage=1 00:08:51.079 --rc genhtml_legend=1 00:08:51.079 --rc geninfo_all_blocks=1 00:08:51.079 --rc geninfo_unexecuted_blocks=1 00:08:51.079 00:08:51.079 ' 00:08:51.079 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:51.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.079 --rc genhtml_branch_coverage=1 00:08:51.079 --rc genhtml_function_coverage=1 00:08:51.079 --rc genhtml_legend=1 00:08:51.079 --rc geninfo_all_blocks=1 00:08:51.079 --rc geninfo_unexecuted_blocks=1 00:08:51.079 00:08:51.079 ' 00:08:51.079 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:51.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.079 --rc genhtml_branch_coverage=1 00:08:51.079 --rc genhtml_function_coverage=1 00:08:51.079 --rc genhtml_legend=1 00:08:51.079 --rc geninfo_all_blocks=1 00:08:51.079 --rc geninfo_unexecuted_blocks=1 00:08:51.079 00:08:51.079 ' 00:08:51.079 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.079 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:51.079 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.079 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.079 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.079 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.079 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.079 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.079 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.079 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.079 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.079 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:51.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.339 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.340 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:51.340 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:51.340 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:51.340 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.549 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:59.550 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:59.550 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:59.550 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.550 07:07:20 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:59.550 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:59.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:08:59.550 00:08:59.550 --- 10.0.0.2 ping statistics --- 00:08:59.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.550 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:59.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:59.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:08:59.550 00:08:59.550 --- 10.0.0.1 ping statistics --- 00:08:59.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.550 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.550 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:59.551 only one NIC for nvmf test 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
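The nvmf_tcp_init sequence traced above (nvmf/common.sh@250-291) builds SPDK's standard two-endpoint TCP test topology out of the two E810 ports found earlier: cvl_0_0 is moved into a fresh network namespace and becomes the target side at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, a comment-tagged iptables rule opens TCP port 4420, and one ping in each direction proves reachability. A minimal standalone sketch of the same pattern follows; the helper name setup_nvmf_tcp_net is hypothetical, while the interface names, addresses, port, and rule tag are taken from the trace.

# Hypothetical helper condensing the nvmf_tcp_init steps seen above.
setup_nvmf_tcp_net() {
    local tgt_if=$1 ini_if=$2 ns=${3:-cvl_0_0_ns_spdk}
    ip -4 addr flush "$tgt_if" && ip -4 addr flush "$ini_if"   # start from a clean slate
    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"                          # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev "$ini_if"                      # initiator side, root namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target side, inside the namespace
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up
    # Tag the rule so teardown can find and strip exactly this rule later (see iptr below).
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT"
    ping -c 1 10.0.0.2                                         # root namespace -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1                     # target namespace -> initiator
}
setup_nvmf_tcp_net cvl_0_0 cvl_0_1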
00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:59.551 rmmod nvme_tcp 00:08:59.551 rmmod nvme_fabrics 00:08:59.551 rmmod nvme_keyring 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.551 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.934 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:00.934 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:00.934 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:00.934 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:00.934 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:00.934 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:00.934 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:00.934 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:00.934 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:00.934 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:00.934 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:00.934 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:00.934 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:00.934 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:00.934 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:00.934 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:00.934 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:00.934 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:00.934 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:00.934 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:00.935 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:00.935 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:00.935 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.935 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.935 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.935 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:00.935 00:09:00.935 real 0m9.838s 00:09:00.935 user 0m2.164s 00:09:00.935 sys 0m5.626s 00:09:00.935 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:00.935 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:00.935 ************************************ 00:09:00.935 END TEST nvmf_target_multipath 00:09:00.935 ************************************ 00:09:00.935 07:07:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:00.935 07:07:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:00.935 07:07:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:00.935 07:07:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:00.935 ************************************ 00:09:00.935 START TEST nvmf_zcopy 00:09:00.935 ************************************ 00:09:00.935 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:00.935 * Looking for test storage... 
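Two idempotency details in the nvmftestfini teardown that closed the multipath test above are worth spelling out. First, module unload runs with set +e inside a bounded for i in {1..20} retry, so a still-busy nvme-tcp or nvme-fabrics module cannot fail the run. Second, iptr (nvmf/common.sh@791) removes only SPDK's own firewall rules by round-tripping the ruleset through iptables-save and filtering on the SPDK_NVMF comment tag attached at insert time. A sketch of both, assuming every SPDK-added rule carries that tag as shown earlier:

# Sweep out only SPDK-tagged rules; all other rules are restored untouched.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Tolerant, bounded module unload (mirrors nvmf/common.sh@124-129).
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1   # assumption: a brief back-off between attempts; the harness may simply retry
done
set -e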
00:09:00.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.935 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:00.935 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:09:00.935 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:01.195 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:01.195 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.195 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.195 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.195 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.195 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.195 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.195 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.195 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:01.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.196 --rc genhtml_branch_coverage=1 00:09:01.196 --rc genhtml_function_coverage=1 00:09:01.196 --rc genhtml_legend=1 00:09:01.196 --rc geninfo_all_blocks=1 00:09:01.196 --rc geninfo_unexecuted_blocks=1 00:09:01.196 00:09:01.196 ' 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:01.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.196 --rc genhtml_branch_coverage=1 00:09:01.196 --rc genhtml_function_coverage=1 00:09:01.196 --rc genhtml_legend=1 00:09:01.196 --rc geninfo_all_blocks=1 00:09:01.196 --rc geninfo_unexecuted_blocks=1 00:09:01.196 00:09:01.196 ' 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:01.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.196 --rc genhtml_branch_coverage=1 00:09:01.196 --rc genhtml_function_coverage=1 00:09:01.196 --rc genhtml_legend=1 00:09:01.196 --rc geninfo_all_blocks=1 00:09:01.196 --rc geninfo_unexecuted_blocks=1 00:09:01.196 00:09:01.196 ' 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:01.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.196 --rc genhtml_branch_coverage=1 00:09:01.196 --rc genhtml_function_coverage=1 00:09:01.196 --rc genhtml_legend=1 00:09:01.196 --rc geninfo_all_blocks=1 00:09:01.196 --rc geninfo_unexecuted_blocks=1 00:09:01.196 00:09:01.196 ' 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:01.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:01.196 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:01.197 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:01.197 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:01.197 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:01.197 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:01.197 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:01.197 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.197 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.197 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.197 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:01.197 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:01.197 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:01.197 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:09.335 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:09.335 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:09.335 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:09.335 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:09.335 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:09.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:09.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:09:09.335 00:09:09.335 --- 10.0.0.2 ping statistics --- 00:09:09.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.335 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:09:09.336 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:09.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:09.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:09:09.336 00:09:09.336 --- 10.0.0.1 ping statistics --- 00:09:09.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.336 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:09:09.336 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.336 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:09.336 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:09.336 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.336 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:09.336 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:09.336 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.336 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:09.336 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:09.336 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:09.336 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:09.336 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:09.336 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.336 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3361521 00:09:09.336 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3361521 00:09:09.336 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 3361521 ']' 00:09:09.336 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:09.336 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.336 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:09.336 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.336 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:09.336 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.336 [2024-11-20 07:07:30.909442] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
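nvmfappstart -m 0x2 above launches nvmf_tgt inside the target namespace (the NVMF_TARGET_NS_CMD prefix was spliced into NVMF_APP at nvmf/common.sh@293), records the pid (3361521 in this run), and blocks in waitforlisten until the application's RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of that launch-and-wait pattern; the polling loop below is our rendering of what waitforlisten does, not a copy of it:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Hypothetical stand-in for waitforlisten: poll the RPC socket, bail if the app dies.
for ((i = 0; i < 100; i++)); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done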
00:09:09.336 [2024-11-20 07:07:30.909509] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.336 [2024-11-20 07:07:31.008818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.336 [2024-11-20 07:07:31.058200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.336 [2024-11-20 07:07:31.058248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.336 [2024-11-20 07:07:31.058257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.336 [2024-11-20 07:07:31.058264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.336 [2024-11-20 07:07:31.058271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.336 [2024-11-20 07:07:31.059023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.598 [2024-11-20 07:07:31.788692] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.598 [2024-11-20 07:07:31.812994] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.598 malloc0 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:09.598 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:09.598 { 00:09:09.598 "params": { 00:09:09.598 "name": "Nvme$subsystem", 00:09:09.598 "trtype": "$TEST_TRANSPORT", 00:09:09.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:09.598 "adrfam": "ipv4", 00:09:09.598 "trsvcid": "$NVMF_PORT", 00:09:09.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:09.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:09.598 "hdgst": ${hdgst:-false}, 00:09:09.598 "ddgst": ${ddgst:-false} 00:09:09.598 }, 00:09:09.598 "method": "bdev_nvme_attach_controller" 00:09:09.599 } 00:09:09.599 EOF 00:09:09.599 )") 00:09:09.599 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:09.860 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
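The rpc_cmd calls traced above are the complete target-side provisioning for this test: a TCP transport created with zero-copy enabled and in-capsule data disabled, a subsystem that allows any host, a listener on the namespaced address, and a 32 MiB malloc bdev attached as namespace 1. rpc_cmd is the harness's wrapper around the RPC socket; issued directly through scripts/rpc.py the same sequence would be roughly:

./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy      # -c 0: no in-capsule data
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                                 # allow any host, up to 10 namespaces
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0             # 32 MiB RAM bdev, 4 KiB blocks
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1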
00:09:09.860 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:09.860 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:09.860 "params": { 00:09:09.860 "name": "Nvme1", 00:09:09.860 "trtype": "tcp", 00:09:09.860 "traddr": "10.0.0.2", 00:09:09.860 "adrfam": "ipv4", 00:09:09.860 "trsvcid": "4420", 00:09:09.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:09.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:09.860 "hdgst": false, 00:09:09.860 "ddgst": false 00:09:09.860 }, 00:09:09.860 "method": "bdev_nvme_attach_controller" 00:09:09.860 }' 00:09:09.860 [2024-11-20 07:07:31.916243] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:09:09.860 [2024-11-20 07:07:31.916309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3361776 ] 00:09:09.860 [2024-11-20 07:07:32.008602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.860 [2024-11-20 07:07:32.061721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.121 Running I/O for 10 seconds... 00:09:12.010 6342.00 IOPS, 49.55 MiB/s [2024-11-20T06:07:35.674Z] 6411.50 IOPS, 50.09 MiB/s [2024-11-20T06:07:36.618Z] 7393.00 IOPS, 57.76 MiB/s [2024-11-20T06:07:37.563Z] 7952.00 IOPS, 62.12 MiB/s [2024-11-20T06:07:38.505Z] 8290.40 IOPS, 64.77 MiB/s [2024-11-20T06:07:39.448Z] 8513.50 IOPS, 66.51 MiB/s [2024-11-20T06:07:40.392Z] 8670.14 IOPS, 67.74 MiB/s [2024-11-20T06:07:41.336Z] 8789.75 IOPS, 68.67 MiB/s [2024-11-20T06:07:42.285Z] 8880.00 IOPS, 69.38 MiB/s [2024-11-20T06:07:42.285Z] 8956.50 IOPS, 69.97 MiB/s 00:09:20.007 Latency(us) 00:09:20.007 [2024-11-20T06:07:42.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:20.007 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:20.007 Verification LBA range: start 0x0 length 0x1000 00:09:20.007 Nvme1n1 : 10.01 8958.14 69.99 0.00 0.00 14241.66 1672.53 28835.84 00:09:20.007 [2024-11-20T06:07:42.285Z] =================================================================================================================== 00:09:20.007 [2024-11-20T06:07:42.285Z] Total : 8958.14 69.99 0.00 0.00 14241.66 1672.53 28835.84 00:09:20.269 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3363868 00:09:20.269 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:20.269 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.269 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:20.269 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:20.269 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:20.269 [2024-11-20 07:07:42.368915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.269 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:20.269 [2024-11-20 07:07:42.368942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.269 07:07:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:20.269 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:20.269 { 00:09:20.269 "params": { 00:09:20.269 "name": "Nvme$subsystem", 00:09:20.269 "trtype": "$TEST_TRANSPORT", 00:09:20.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:20.269 "adrfam": "ipv4", 00:09:20.269 "trsvcid": "$NVMF_PORT", 00:09:20.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:20.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:20.269 "hdgst": ${hdgst:-false}, 00:09:20.269 "ddgst": ${ddgst:-false} 00:09:20.269 }, 00:09:20.269 "method": "bdev_nvme_attach_controller" 00:09:20.269 } 00:09:20.269 EOF 00:09:20.269 )") 00:09:20.269 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:20.269 [2024-11-20 07:07:42.376908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.269 [2024-11-20 07:07:42.376917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.269 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:20.269 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:20.269 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:20.269 "params": { 00:09:20.269 "name": "Nvme1", 00:09:20.269 "trtype": "tcp", 00:09:20.269 "traddr": "10.0.0.2", 00:09:20.269 "adrfam": "ipv4", 00:09:20.269 "trsvcid": "4420", 00:09:20.269 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:20.269 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:20.269 "hdgst": false, 00:09:20.269 "ddgst": false 00:09:20.269 }, 00:09:20.269 "method": "bdev_nvme_attach_controller" 00:09:20.269 }' 00:09:20.269 [2024-11-20 07:07:42.384927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.269 [2024-11-20 07:07:42.384935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.269 [2024-11-20 07:07:42.392948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.269 [2024-11-20 07:07:42.392956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.269 [2024-11-20 07:07:42.404980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.269 [2024-11-20 07:07:42.404993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.269 [2024-11-20 07:07:42.416014] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
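The bdevperf invocations in this test never touch a config file on disk: gen_nvmf_target_json assembles one JSON fragment per controller from a heredoc (note the ${hdgst:-false}/${ddgst:-false} defaults visible in the trace), joins the fragments with IFS=, and validates with jq, and the stream reaches bdevperf through process substitution, which is why the traces show --json /dev/fd/62 and /dev/fd/63. A condensed sketch of the fragment pattern; the outer envelope that the real helper wraps around the fragments is an assumption here, since only the fragment, the IFS join, and the jq call are visible above:

# Fragment assembly as in gen_nvmf_target_json (one fragment per controller;
# the real helper loops over its arguments, defaulting to subsystem 1).
config=()
subsystem=1
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
# Join the fragments and validate, as in the IFS=, / printf / jq trio traced above.
(IFS=,; printf '%s\n' "${config[*]}" | jq .)
# In the test itself the generated stream is handed to bdevperf over an
# anonymous fd (hence --json /dev/fd/63 in the trace):
./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192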
00:09:20.269 [2024-11-20 07:07:42.416063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3363868 ] 00:09:20.269 [2024-11-20 07:07:42.417010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.269 [2024-11-20 07:07:42.417020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.269 [2024-11-20 07:07:42.429041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.269 [2024-11-20 07:07:42.429048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.269 [2024-11-20 07:07:42.441070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.269 [2024-11-20 07:07:42.441078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.269 [2024-11-20 07:07:42.449094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.269 [2024-11-20 07:07:42.449101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.269 [2024-11-20 07:07:42.457112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.269 [2024-11-20 07:07:42.457120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.269 [2024-11-20 07:07:42.465133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.270 [2024-11-20 07:07:42.465141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.270 [2024-11-20 07:07:42.473153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.270 [2024-11-20 07:07:42.473163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.270 [2024-11-20 07:07:42.481177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.270 [2024-11-20 07:07:42.481184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.270 [2024-11-20 07:07:42.489197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.270 [2024-11-20 07:07:42.489205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.270 [2024-11-20 07:07:42.498326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.270 [2024-11-20 07:07:42.501226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.270 [2024-11-20 07:07:42.501234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.270 [2024-11-20 07:07:42.513255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.270 [2024-11-20 07:07:42.513263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.270 [2024-11-20 07:07:42.525285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.270 [2024-11-20 07:07:42.525295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.270 [2024-11-20 07:07:42.527608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.270 [2024-11-20 07:07:42.537317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:09:20.270 [2024-11-20 07:07:42.537325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.530 [2024-11-20 07:07:42.549352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.530 [2024-11-20 07:07:42.549364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.530 [2024-11-20 07:07:42.561379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.530 [2024-11-20 07:07:42.561391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.530 [2024-11-20 07:07:42.573407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.530 [2024-11-20 07:07:42.573418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.530 [2024-11-20 07:07:42.585436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.530 [2024-11-20 07:07:42.585444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.530 [2024-11-20 07:07:42.597475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.530 [2024-11-20 07:07:42.597494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.530 [2024-11-20 07:07:42.609498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.530 [2024-11-20 07:07:42.609508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.530 [2024-11-20 07:07:42.621530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.530 [2024-11-20 07:07:42.621541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.530 [2024-11-20 07:07:42.633561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.530 [2024-11-20 07:07:42.633568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.530 [2024-11-20 07:07:42.645591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.530 [2024-11-20 07:07:42.645599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.530 [2024-11-20 07:07:42.657624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.530 [2024-11-20 07:07:42.657632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.530 [2024-11-20 07:07:42.669656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.530 [2024-11-20 07:07:42.669667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.530 [2024-11-20 07:07:42.681686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.530 [2024-11-20 07:07:42.681695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.530 [2024-11-20 07:07:42.693717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.530 [2024-11-20 07:07:42.693724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.530 [2024-11-20 07:07:42.705746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.530 [2024-11-20 07:07:42.705754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.530 [2024-11-20 
07:07:42.717780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.530 [2024-11-20 07:07:42.717790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.530 [2024-11-20 07:07:42.729808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.530 [2024-11-20 07:07:42.729816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.530 [2024-11-20 07:07:42.741839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.530 [2024-11-20 07:07:42.741847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.530 [2024-11-20 07:07:42.753870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.530 [2024-11-20 07:07:42.753878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.530 [2024-11-20 07:07:42.765903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.530 [2024-11-20 07:07:42.765913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.530 [2024-11-20 07:07:42.777935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.530 [2024-11-20 07:07:42.777942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.530 [2024-11-20 07:07:42.789967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.530 [2024-11-20 07:07:42.789975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.530 [2024-11-20 07:07:42.802002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.530 [2024-11-20 07:07:42.802013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.791 [2024-11-20 07:07:42.814034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.791 [2024-11-20 07:07:42.814043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.791 [2024-11-20 07:07:42.826070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.791 [2024-11-20 07:07:42.826085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.791 Running I/O for 5 seconds... 
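[editor's note] The error pair that repeats throughout this window comes from the zcopy test re-issuing an add-namespace RPC for NSID 1 against the subsystem while bdevperf I/O is in flight (the nvmf_rpc_ns_paused frame suggests the add is attempted through the pause/resume path, apparently by design); each attempt is rejected because NSID 1 is already occupied. A hedged sketch of a call that should reproduce the same pair against a running target — the bdev name Malloc0 is an assumption for illustration, not taken from this log:

# Assumes a target that already exposes NSID 1 on cnode1 and a bdev named
# Malloc0 (both assumptions, not from this run's trace).
scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0
# With NSID 1 already attached, the target logs the same two lines seen here:
#   subsystem.c: spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
#   nvmf_rpc.c:  nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace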
00:09:20.791 [2024-11-20 07:07:42.838095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.791 [2024-11-20 07:07:42.838105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.791 [2024-11-20 07:07:42.853548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.791 [2024-11-20 07:07:42.853566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.791 [2024-11-20 07:07:42.867174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.791 [2024-11-20 07:07:42.867192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.791 [2024-11-20 07:07:42.881000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.791 [2024-11-20 07:07:42.881016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.791 [2024-11-20 07:07:42.893824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.791 [2024-11-20 07:07:42.893840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.791 [2024-11-20 07:07:42.906488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.791 [2024-11-20 07:07:42.906504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.791 [2024-11-20 07:07:42.920537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.791 [2024-11-20 07:07:42.920552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.791 [2024-11-20 07:07:42.934378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.791 [2024-11-20 07:07:42.934394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.791 [2024-11-20 07:07:42.947006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.791 [2024-11-20 07:07:42.947022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.791 [2024-11-20 07:07:42.959570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.791 [2024-11-20 07:07:42.959585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.791 [2024-11-20 07:07:42.972434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.791 [2024-11-20 07:07:42.972450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.791 [2024-11-20 07:07:42.985004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.791 [2024-11-20 07:07:42.985019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.791 [2024-11-20 07:07:42.998474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.791 [2024-11-20 07:07:42.998490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.791 [2024-11-20 07:07:43.012211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.791 [2024-11-20 07:07:43.012227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.791 [2024-11-20 07:07:43.024645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.791 
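[editor's note] The periodic "IOPS, MiB/s" samples further down (19045.00 -> 148.79, 19170.00 -> 149.77, 19207.33 -> 150.06) are consistent with MiB/s = IOPS x io_size / 2^20 at an 8 KiB I/O size; the size is inferred from the ratio, since it is not stated in this window of the log. A quick check of the first sample:

# Back out MiB/s from the first sample, assuming the inferred 8 KiB I/O size.
awk 'BEGIN { printf "%.2f MiB/s\n", 19045.00 * 8192 / 1048576 }'   # prints 148.79 MiB/s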
[2024-11-20 07:07:43.024660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.791 [2024-11-20 07:07:43.038262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.791 [2024-11-20 07:07:43.038277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.791 [2024-11-20 07:07:43.050937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.791 [2024-11-20 07:07:43.050959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.791 [2024-11-20 07:07:43.064541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.791 [2024-11-20 07:07:43.064556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.052 [2024-11-20 07:07:43.077433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.052 [2024-11-20 07:07:43.077448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.052 [2024-11-20 07:07:43.091007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.052 [2024-11-20 07:07:43.091023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.052 [2024-11-20 07:07:43.104421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.052 [2024-11-20 07:07:43.104436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.052 [2024-11-20 07:07:43.118066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.052 [2024-11-20 07:07:43.118081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.052 [2024-11-20 07:07:43.131546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.052 [2024-11-20 07:07:43.131561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.052 [2024-11-20 07:07:43.144316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.052 [2024-11-20 07:07:43.144331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.052 [2024-11-20 07:07:43.157656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.052 [2024-11-20 07:07:43.157671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.052 [2024-11-20 07:07:43.170222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.052 [2024-11-20 07:07:43.170237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.052 [2024-11-20 07:07:43.182578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.052 [2024-11-20 07:07:43.182593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.052 [2024-11-20 07:07:43.195262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.052 [2024-11-20 07:07:43.195277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.052 [2024-11-20 07:07:43.208161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.052 [2024-11-20 07:07:43.208176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.052 [2024-11-20 07:07:43.220828] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.052 [2024-11-20 07:07:43.220843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.052 [2024-11-20 07:07:43.234276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.052 [2024-11-20 07:07:43.234292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.052 [2024-11-20 07:07:43.247579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.052 [2024-11-20 07:07:43.247597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.052 [2024-11-20 07:07:43.260931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.052 [2024-11-20 07:07:43.260946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.052 [2024-11-20 07:07:43.273532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.052 [2024-11-20 07:07:43.273549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.052 [2024-11-20 07:07:43.287287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.052 [2024-11-20 07:07:43.287302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.052 [2024-11-20 07:07:43.300591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.052 [2024-11-20 07:07:43.300611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.052 [2024-11-20 07:07:43.314095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.052 [2024-11-20 07:07:43.314110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.313 [2024-11-20 07:07:43.327290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.313 [2024-11-20 07:07:43.327305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.313 [2024-11-20 07:07:43.340846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.313 [2024-11-20 07:07:43.340861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.313 [2024-11-20 07:07:43.354086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.313 [2024-11-20 07:07:43.354101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.313 [2024-11-20 07:07:43.367906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.313 [2024-11-20 07:07:43.367921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.313 [2024-11-20 07:07:43.380400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.313 [2024-11-20 07:07:43.380415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.313 [2024-11-20 07:07:43.393711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.313 [2024-11-20 07:07:43.393727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.313 [2024-11-20 07:07:43.407222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.313 [2024-11-20 07:07:43.407238] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.313 [2024-11-20 07:07:43.420412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.313 [2024-11-20 07:07:43.420429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.313 [2024-11-20 07:07:43.433427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.313 [2024-11-20 07:07:43.433443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.313 [2024-11-20 07:07:43.446894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.313 [2024-11-20 07:07:43.446910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.313 [2024-11-20 07:07:43.460423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.313 [2024-11-20 07:07:43.460439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.313 [2024-11-20 07:07:43.474056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.313 [2024-11-20 07:07:43.474071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.313 [2024-11-20 07:07:43.487907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.313 [2024-11-20 07:07:43.487921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.313 [2024-11-20 07:07:43.501065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.313 [2024-11-20 07:07:43.501081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.313 [2024-11-20 07:07:43.514605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.313 [2024-11-20 07:07:43.514622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.313 [2024-11-20 07:07:43.527599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.313 [2024-11-20 07:07:43.527614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.313 [2024-11-20 07:07:43.541166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.313 [2024-11-20 07:07:43.541182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.313 [2024-11-20 07:07:43.554756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.313 [2024-11-20 07:07:43.554772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.313 [2024-11-20 07:07:43.567325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.313 [2024-11-20 07:07:43.567341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.313 [2024-11-20 07:07:43.580253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.313 [2024-11-20 07:07:43.580268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.573 [2024-11-20 07:07:43.593067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.574 [2024-11-20 07:07:43.593082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.574 [2024-11-20 07:07:43.606143] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.574 [2024-11-20 07:07:43.606164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.574 [2024-11-20 07:07:43.619531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.574 [2024-11-20 07:07:43.619547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.574 [2024-11-20 07:07:43.632517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.574 [2024-11-20 07:07:43.632532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.574 [2024-11-20 07:07:43.646170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.574 [2024-11-20 07:07:43.646186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.574 [2024-11-20 07:07:43.658864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.574 [2024-11-20 07:07:43.658879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.574 [2024-11-20 07:07:43.671424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.574 [2024-11-20 07:07:43.671440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.574 [2024-11-20 07:07:43.684761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.574 [2024-11-20 07:07:43.684776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.574 [2024-11-20 07:07:43.697683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.574 [2024-11-20 07:07:43.697698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.574 [2024-11-20 07:07:43.710454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.574 [2024-11-20 07:07:43.710469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.574 [2024-11-20 07:07:43.723354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.574 [2024-11-20 07:07:43.723370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.574 [2024-11-20 07:07:43.735710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.574 [2024-11-20 07:07:43.735725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.574 [2024-11-20 07:07:43.748351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.574 [2024-11-20 07:07:43.748367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.574 [2024-11-20 07:07:43.761341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.574 [2024-11-20 07:07:43.761357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.574 [2024-11-20 07:07:43.773853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.574 [2024-11-20 07:07:43.773869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.574 [2024-11-20 07:07:43.786692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.574 [2024-11-20 07:07:43.786708] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.574 [2024-11-20 07:07:43.799929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.574 [2024-11-20 07:07:43.799945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.574 [2024-11-20 07:07:43.812108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.574 [2024-11-20 07:07:43.812125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.574 [2024-11-20 07:07:43.824865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.574 [2024-11-20 07:07:43.824880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.574 [2024-11-20 07:07:43.837723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.574 [2024-11-20 07:07:43.837737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.835 19045.00 IOPS, 148.79 MiB/s [2024-11-20T06:07:44.113Z] [2024-11-20 07:07:43.850976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.835 [2024-11-20 07:07:43.850992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.835 [2024-11-20 07:07:43.864083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.835 [2024-11-20 07:07:43.864099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.835 [2024-11-20 07:07:43.876931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.835 [2024-11-20 07:07:43.876947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.835 [2024-11-20 07:07:43.889592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.835 [2024-11-20 07:07:43.889608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.835 [2024-11-20 07:07:43.902886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.835 [2024-11-20 07:07:43.902902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.835 [2024-11-20 07:07:43.916428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.835 [2024-11-20 07:07:43.916443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.835 [2024-11-20 07:07:43.929086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.835 [2024-11-20 07:07:43.929102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.835 [2024-11-20 07:07:43.942268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.835 [2024-11-20 07:07:43.942284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.835 [2024-11-20 07:07:43.955757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.835 [2024-11-20 07:07:43.955773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.835 [2024-11-20 07:07:43.969149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.835 [2024-11-20 07:07:43.969169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.835 [2024-11-20 
07:07:43.982559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.835 [2024-11-20 07:07:43.982574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.835 [2024-11-20 07:07:43.995982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.835 [2024-11-20 07:07:43.995998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.835 [2024-11-20 07:07:44.009414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.835 [2024-11-20 07:07:44.009430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.835 [2024-11-20 07:07:44.022652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.835 [2024-11-20 07:07:44.022667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.835 [2024-11-20 07:07:44.035314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.835 [2024-11-20 07:07:44.035329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.835 [2024-11-20 07:07:44.047649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.835 [2024-11-20 07:07:44.047665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.835 [2024-11-20 07:07:44.060383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.835 [2024-11-20 07:07:44.060398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.835 [2024-11-20 07:07:44.073415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.835 [2024-11-20 07:07:44.073430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.835 [2024-11-20 07:07:44.086260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.835 [2024-11-20 07:07:44.086275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.835 [2024-11-20 07:07:44.099576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.835 [2024-11-20 07:07:44.099592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.096 [2024-11-20 07:07:44.113153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.096 [2024-11-20 07:07:44.113174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.096 [2024-11-20 07:07:44.126248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.096 [2024-11-20 07:07:44.126263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.096 [2024-11-20 07:07:44.139717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.096 [2024-11-20 07:07:44.139733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.096 [2024-11-20 07:07:44.153403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.096 [2024-11-20 07:07:44.153418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.096 [2024-11-20 07:07:44.166714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.096 [2024-11-20 07:07:44.166729] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.096 [2024-11-20 07:07:44.179073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.096 [2024-11-20 07:07:44.179088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.096 [2024-11-20 07:07:44.192106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.096 [2024-11-20 07:07:44.192121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.096 [2024-11-20 07:07:44.204650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.096 [2024-11-20 07:07:44.204665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.096 [2024-11-20 07:07:44.217246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.096 [2024-11-20 07:07:44.217261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.096 [2024-11-20 07:07:44.229920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.096 [2024-11-20 07:07:44.229935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.096 [2024-11-20 07:07:44.243506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.096 [2024-11-20 07:07:44.243521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.096 [2024-11-20 07:07:44.257156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.096 [2024-11-20 07:07:44.257174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.096 [2024-11-20 07:07:44.270716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.096 [2024-11-20 07:07:44.270731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.096 [2024-11-20 07:07:44.284151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.096 [2024-11-20 07:07:44.284173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.096 [2024-11-20 07:07:44.297635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.096 [2024-11-20 07:07:44.297650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.096 [2024-11-20 07:07:44.310602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.096 [2024-11-20 07:07:44.310617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.096 [2024-11-20 07:07:44.323643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.096 [2024-11-20 07:07:44.323658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.096 [2024-11-20 07:07:44.336684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.096 [2024-11-20 07:07:44.336699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.096 [2024-11-20 07:07:44.349848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.096 [2024-11-20 07:07:44.349863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.096 [2024-11-20 07:07:44.362403] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.096 [2024-11-20 07:07:44.362418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.357 [2024-11-20 07:07:44.375695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.357 [2024-11-20 07:07:44.375710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.357 [2024-11-20 07:07:44.389185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.357 [2024-11-20 07:07:44.389201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.357 [2024-11-20 07:07:44.402372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.357 [2024-11-20 07:07:44.402387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.357 [2024-11-20 07:07:44.415133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.357 [2024-11-20 07:07:44.415148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.357 [2024-11-20 07:07:44.427958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.357 [2024-11-20 07:07:44.427973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.357 [2024-11-20 07:07:44.440852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.357 [2024-11-20 07:07:44.440867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.357 [2024-11-20 07:07:44.454419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.357 [2024-11-20 07:07:44.454434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.357 [2024-11-20 07:07:44.467866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.357 [2024-11-20 07:07:44.467881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.357 [2024-11-20 07:07:44.480407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.357 [2024-11-20 07:07:44.480422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.357 [2024-11-20 07:07:44.493369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.357 [2024-11-20 07:07:44.493384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.357 [2024-11-20 07:07:44.506092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.357 [2024-11-20 07:07:44.506107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.357 [2024-11-20 07:07:44.519064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.357 [2024-11-20 07:07:44.519079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.357 [2024-11-20 07:07:44.531673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.357 [2024-11-20 07:07:44.531691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.357 [2024-11-20 07:07:44.545131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.357 [2024-11-20 07:07:44.545146] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.357 [2024-11-20 07:07:44.558540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.357 [2024-11-20 07:07:44.558555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.357 [2024-11-20 07:07:44.571764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.357 [2024-11-20 07:07:44.571779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.357 [2024-11-20 07:07:44.584973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.357 [2024-11-20 07:07:44.584988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.357 [2024-11-20 07:07:44.598567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.357 [2024-11-20 07:07:44.598582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.357 [2024-11-20 07:07:44.611419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.357 [2024-11-20 07:07:44.611434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.357 [2024-11-20 07:07:44.624475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.357 [2024-11-20 07:07:44.624491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.618 [2024-11-20 07:07:44.637090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.618 [2024-11-20 07:07:44.637106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.618 [2024-11-20 07:07:44.650905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.618 [2024-11-20 07:07:44.650919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.618 [2024-11-20 07:07:44.664485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.618 [2024-11-20 07:07:44.664500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.618 [2024-11-20 07:07:44.677031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.618 [2024-11-20 07:07:44.677046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.618 [2024-11-20 07:07:44.690219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.618 [2024-11-20 07:07:44.690234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.618 [2024-11-20 07:07:44.702972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.618 [2024-11-20 07:07:44.702987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.618 [2024-11-20 07:07:44.716436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.618 [2024-11-20 07:07:44.716451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.618 [2024-11-20 07:07:44.729434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.618 [2024-11-20 07:07:44.729449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.618 [2024-11-20 07:07:44.743180] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.618 [2024-11-20 07:07:44.743196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.618 [2024-11-20 07:07:44.756331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.618 [2024-11-20 07:07:44.756345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.618 [2024-11-20 07:07:44.769852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.618 [2024-11-20 07:07:44.769867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.618 [2024-11-20 07:07:44.782611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.618 [2024-11-20 07:07:44.782630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.618 [2024-11-20 07:07:44.795579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.618 [2024-11-20 07:07:44.795594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.618 [2024-11-20 07:07:44.808295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.618 [2024-11-20 07:07:44.808310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.618 [2024-11-20 07:07:44.822312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.618 [2024-11-20 07:07:44.822327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.618 [2024-11-20 07:07:44.834905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.618 [2024-11-20 07:07:44.834920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.618 19170.00 IOPS, 149.77 MiB/s [2024-11-20T06:07:44.896Z] [2024-11-20 07:07:44.848414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.618 [2024-11-20 07:07:44.848430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.618 [2024-11-20 07:07:44.861772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.618 [2024-11-20 07:07:44.861787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.618 [2024-11-20 07:07:44.874426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.618 [2024-11-20 07:07:44.874442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.618 [2024-11-20 07:07:44.888130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.618 [2024-11-20 07:07:44.888146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.879 [2024-11-20 07:07:44.901722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.879 [2024-11-20 07:07:44.901740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.879 [2024-11-20 07:07:44.914711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.879 [2024-11-20 07:07:44.914726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.879 [2024-11-20 07:07:44.927969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:22.879 [2024-11-20 07:07:44.927984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.879 [2024-11-20 07:07:44.940656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.879 [2024-11-20 07:07:44.940671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.879 [2024-11-20 07:07:44.953349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.879 [2024-11-20 07:07:44.953364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.879 [2024-11-20 07:07:44.966859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.879 [2024-11-20 07:07:44.966874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.879 [2024-11-20 07:07:44.980686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.879 [2024-11-20 07:07:44.980701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.879 [2024-11-20 07:07:44.993600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.879 [2024-11-20 07:07:44.993615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.879 [2024-11-20 07:07:45.006527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.879 [2024-11-20 07:07:45.006542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.879 [2024-11-20 07:07:45.019200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.879 [2024-11-20 07:07:45.019215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.879 [2024-11-20 07:07:45.032431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.879 [2024-11-20 07:07:45.032446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.879 [2024-11-20 07:07:45.045961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.879 [2024-11-20 07:07:45.045976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.879 [2024-11-20 07:07:45.059241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.879 [2024-11-20 07:07:45.059257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.879 [2024-11-20 07:07:45.071748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.879 [2024-11-20 07:07:45.071763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.879 [2024-11-20 07:07:45.084258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.879 [2024-11-20 07:07:45.084273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.879 [2024-11-20 07:07:45.097647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.879 [2024-11-20 07:07:45.097662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.879 [2024-11-20 07:07:45.111129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.879 [2024-11-20 07:07:45.111144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.879 [2024-11-20 07:07:45.123944] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.879 [2024-11-20 07:07:45.123959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.879 [2024-11-20 07:07:45.137298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.879 [2024-11-20 07:07:45.137313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.879 [2024-11-20 07:07:45.150150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.879 [2024-11-20 07:07:45.150171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.139 [2024-11-20 07:07:45.163451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.139 [2024-11-20 07:07:45.163468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.139 [2024-11-20 07:07:45.176293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.139 [2024-11-20 07:07:45.176309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.139 [2024-11-20 07:07:45.189517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.139 [2024-11-20 07:07:45.189533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.139 [2024-11-20 07:07:45.202473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.139 [2024-11-20 07:07:45.202489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.139 [2024-11-20 07:07:45.215587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.139 [2024-11-20 07:07:45.215603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.139 [2024-11-20 07:07:45.228586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.139 [2024-11-20 07:07:45.228602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.139 [2024-11-20 07:07:45.242074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.139 [2024-11-20 07:07:45.242090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.139 [2024-11-20 07:07:45.255580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.139 [2024-11-20 07:07:45.255595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.139 [2024-11-20 07:07:45.269227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.139 [2024-11-20 07:07:45.269243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.139 [2024-11-20 07:07:45.282980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.139 [2024-11-20 07:07:45.282996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.139 [2024-11-20 07:07:45.296581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.139 [2024-11-20 07:07:45.296597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.139 [2024-11-20 07:07:45.309661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.139 [2024-11-20 07:07:45.309677] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:23.139 [2024-11-20 07:07:45.323137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:23.139 [2024-11-20 07:07:45.323153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:23.139 [... the same subsystem.c:2123 "Requested NSID 1 already in use" / nvmf_rpc.c:1517 "Unable to add namespace" pair repeats roughly every 13 ms, 07:07:45.336535 through 07:07:47.849554 (about 190 pairs), while the test keeps calling nvmf_subsystem_add_ns against the attached NSID 1; the interleaved throughput snapshots follow ...]
00:09:23.661 19207.33 IOPS, 150.06 MiB/s [2024-11-20T06:07:45.939Z]
00:09:24.706 19223.00 IOPS, 150.18 MiB/s [2024-11-20T06:07:46.984Z]
00:09:25.778 19229.20 IOPS, 150.23 MiB/s
00:09:25.778 Latency(us)
00:09:25.778 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:09:25.778 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:25.778 Nvme1n1                     :       5.01   19230.54     150.24       0.00       0.00    6650.56    2990.08   14417.92
00:09:25.778 ===============================================================================================================
00:09:25.778 Total                       :              19230.54     150.24       0.00       0.00    6650.56    2990.08   14417.92
00:09:25.778 [... nine more identical error pairs, 07:07:47.859116 through 07:07:47.955360, as the add-namespace loop drains ...]
00:09:25.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3363868) - No such process
00:09:25.779 07:07:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3363868
00:09:25.779 07:07:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:25.779 07:07:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:25.779 07:07:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:25.779 07:07:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:25.779 07:07:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:25.779 07:07:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:25.779 07:07:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:25.779 delay0
00:09:25.779 07:07:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:25.779 07:07:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:25.779 07:07:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:25.779 07:07:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:25.779 07:07:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:25.779 07:07:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:09:26.040 [2024-11-20 07:07:48.129728] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:32.622 [2024-11-20 07:07:54.279382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b2560 is same with the state(6) to be set
00:09:32.622 [2024-11-20 07:07:54.279411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b2560 is same with the state(6) to be set
00:09:32.622 Initializing NVMe Controllers
00:09:32.622 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:32.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:32.622 Initialization complete. Launching workers.
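The namespace churn above is the point of this test phase: nvmf_subsystem_add_ns fails with "Requested NSID 1 already in use" for as long as NSID 1 stays attached, and only succeeds after zcopy.sh@52 removes it. A minimal sketch of that sequence against a running target, using SPDK's scripts/rpc.py (the rpc_cmd wrapper in the trace resolves to it; the bdev name and malloc size/block values here are illustrative, the NQN mirrors the trace):

  # attach a bdev as NSID 1 of cnode1 (succeeds once)
  scripts/rpc.py bdev_malloc_create -b malloc0 64 512
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # re-adding the same NSID while it is attached is rejected:
  #   subsystem.c: Requested NSID 1 already in use
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # remove first, then the add goes through again
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The per-namespace and per-controller statistics for the abort run launched above follow.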
00:09:32.622 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 261 00:09:32.622 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 550, failed to submit 31 00:09:32.622 success 342, unsuccessful 208, failed 0 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:32.622 rmmod nvme_tcp 00:09:32.622 rmmod nvme_fabrics 00:09:32.622 rmmod nvme_keyring 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3361521 ']' 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3361521 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 3361521 ']' 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 3361521 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3361521 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3361521' 00:09:32.622 killing process with pid 3361521 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 3361521 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 3361521 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:32.622 07:07:54 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.622 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.534 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:34.534 00:09:34.534 real 0m33.553s 00:09:34.534 user 0m44.236s 00:09:34.534 sys 0m11.189s 00:09:34.534 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:34.534 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.534 ************************************ 00:09:34.534 END TEST nvmf_zcopy 00:09:34.534 ************************************ 00:09:34.534 07:07:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:34.534 07:07:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:34.534 07:07:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:34.534 07:07:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:34.534 ************************************ 00:09:34.534 START TEST nvmf_nmic 00:09:34.534 ************************************ 00:09:34.534 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:34.534 * Looking for test storage... 
00:09:34.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:34.534 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:34.534 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:34.534 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:34.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.795 --rc genhtml_branch_coverage=1 00:09:34.795 --rc genhtml_function_coverage=1 00:09:34.795 --rc genhtml_legend=1 00:09:34.795 --rc geninfo_all_blocks=1 00:09:34.795 --rc geninfo_unexecuted_blocks=1 00:09:34.795 00:09:34.795 ' 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:34.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.795 --rc genhtml_branch_coverage=1 00:09:34.795 --rc genhtml_function_coverage=1 00:09:34.795 --rc genhtml_legend=1 00:09:34.795 --rc geninfo_all_blocks=1 00:09:34.795 --rc geninfo_unexecuted_blocks=1 00:09:34.795 00:09:34.795 ' 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:34.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.795 --rc genhtml_branch_coverage=1 00:09:34.795 --rc genhtml_function_coverage=1 00:09:34.795 --rc genhtml_legend=1 00:09:34.795 --rc geninfo_all_blocks=1 00:09:34.795 --rc geninfo_unexecuted_blocks=1 00:09:34.795 00:09:34.795 ' 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:34.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.795 --rc genhtml_branch_coverage=1 00:09:34.795 --rc genhtml_function_coverage=1 00:09:34.795 --rc genhtml_legend=1 00:09:34.795 --rc geninfo_all_blocks=1 00:09:34.795 --rc geninfo_unexecuted_blocks=1 00:09:34.795 00:09:34.795 ' 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
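The lcov probe traced above runs SPDK's shell version comparator: both version strings are split on '.', '-' and ':' into arrays, then compared element by element until one side wins. A standalone sketch of the same idea (simplified to the strict '<' and '>' operators; the real helper in scripts/common.sh also normalizes non-numeric fields through its decimal() helper and supports the other comparison operators):

  cmp_versions() {   # usage: cmp_versions 1.15 '<' 2
      local IFS=.-:
      local op=$2
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$3"
      local i a b max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < max; i++ )); do
          a=${v1[i]:-0} b=${v2[i]:-0}     # missing fields compare as 0
          (( a == b )) && continue
          case $op in
              '<') (( a < b )); return ;;
              '>') (( a > b )); return ;;
          esac
      done
      return 1   # versions compare equal, so a strict test fails
  }
  cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2"

This is why the trace above reads ver1=(1 15) against ver2=(2): the first fields 1 and 2 already differ, so 1.15 is classified as older than 2 without looking at the remainder.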
00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.795 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:34.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:34.796 
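Aside: the '[: : integer expression expected' message logged above is bash's test builtin rejecting an arithmetic comparison whose left operand is empty — common.sh line 33 runs the equivalent of [ '' -eq 1 ], and -eq requires integers on both sides, so the test errors out with status 2 and the script carries on as if it were false. A minimal reproduction with two defensive spellings (the variable name is illustrative):

flag=''
[ "$flag" -eq 1 ] && echo hit        # prints '[: : integer expression expected', status 2
[ "${flag:-0}" -eq 1 ] && echo hit   # default empty to 0: quiet, evaluates false
[[ $flag == 1 ]] && echo hit         # string comparison: never errors on empty input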
07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:34.796 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:42.935 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:42.935 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.935 07:08:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:42.935 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:42.935 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:42.935 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:42.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:42.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:09:42.936 00:09:42.936 --- 10.0.0.2 ping statistics --- 00:09:42.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.936 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:42.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:42.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:09:42.936 00:09:42.936 --- 10.0.0.1 ping statistics --- 00:09:42.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.936 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3370313 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3370313 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 3370313 ']' 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:42.936 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:42.936 [2024-11-20 07:08:04.481390] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:09:42.936 [2024-11-20 07:08:04.481480] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.936 [2024-11-20 07:08:04.584927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.936 [2024-11-20 07:08:04.639907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.936 [2024-11-20 07:08:04.639966] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.936 [2024-11-20 07:08:04.639975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.936 [2024-11-20 07:08:04.639982] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.936 [2024-11-20 07:08:04.639988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.936 [2024-11-20 07:08:04.642032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.936 [2024-11-20 07:08:04.642212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.936 [2024-11-20 07:08:04.642325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.936 [2024-11-20 07:08:04.642325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.197 [2024-11-20 07:08:05.351102] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.197 Malloc0 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.197 [2024-11-20 07:08:05.430310] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:43.197 test case1: single bdev can't be used in multiple subsystems 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.197 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.197 [2024-11-20 07:08:05.466115] bdev.c:8254:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:43.197 [2024-11-20 07:08:05.466143] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:43.197 [2024-11-20 07:08:05.466153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.197 request: 00:09:43.197 { 00:09:43.197 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:43.197 "namespace": { 00:09:43.197 "bdev_name": "Malloc0", 00:09:43.197 "no_auto_visible": false 
00:09:43.197 }, 00:09:43.197 "method": "nvmf_subsystem_add_ns", 00:09:43.197 "req_id": 1 00:09:43.458 } 00:09:43.458 Got JSON-RPC error response 00:09:43.458 response: 00:09:43.458 { 00:09:43.458 "code": -32602, 00:09:43.458 "message": "Invalid parameters" 00:09:43.458 } 00:09:43.458 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:43.458 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:43.458 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:43.458 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:43.458 Adding namespace failed - expected result. 00:09:43.459 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:43.459 test case2: host connect to nvmf target in multiple paths 00:09:43.459 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:43.459 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.459 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.459 [2024-11-20 07:08:05.478340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:43.459 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.459 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:44.845 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:46.757 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:46.757 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:09:46.757 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:46.757 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:46.757 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:09:48.665 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:48.665 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:48.666 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:48.666 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:48.666 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:48.666 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:09:48.666 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:48.666 [global] 00:09:48.666 thread=1 00:09:48.666 invalidate=1 00:09:48.666 rw=write 00:09:48.666 time_based=1 00:09:48.666 runtime=1 00:09:48.666 ioengine=libaio 00:09:48.666 direct=1 00:09:48.666 bs=4096 00:09:48.666 iodepth=1 00:09:48.666 norandommap=0 00:09:48.666 numjobs=1 00:09:48.666 00:09:48.666 verify_dump=1 00:09:48.666 verify_backlog=512 00:09:48.666 verify_state_save=0 00:09:48.666 do_verify=1 00:09:48.666 verify=crc32c-intel 00:09:48.666 [job0] 00:09:48.666 filename=/dev/nvme0n1 00:09:48.666 Could not set queue depth (nvme0n1) 00:09:48.927 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.927 fio-3.35 00:09:48.927 Starting 1 thread 00:09:49.869 00:09:49.869 job0: (groupid=0, jobs=1): err= 0: pid=3371799: Wed Nov 20 07:08:12 2024 00:09:49.869 read: IOPS=610, BW=2442KiB/s (2500kB/s)(2444KiB/1001msec) 00:09:49.869 slat (nsec): min=6737, max=59751, avg=24154.77, stdev=5332.73 00:09:49.869 clat (usec): min=489, max=1032, avg=829.25, stdev=99.90 00:09:49.869 lat (usec): min=514, max=1057, avg=853.40, stdev=100.23 00:09:49.869 clat percentiles (usec): 00:09:49.869 | 1.00th=[ 586], 5.00th=[ 635], 10.00th=[ 693], 20.00th=[ 750], 00:09:49.869 | 30.00th=[ 775], 40.00th=[ 816], 50.00th=[ 857], 60.00th=[ 881], 00:09:49.869 | 70.00th=[ 898], 80.00th=[ 914], 90.00th=[ 938], 95.00th=[ 963], 00:09:49.869 | 99.00th=[ 1004], 99.50th=[ 1012], 99.90th=[ 1037], 99.95th=[ 1037], 00:09:49.869 | 99.99th=[ 1037] 00:09:49.869 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:49.869 slat (nsec): min=9497, max=67592, avg=28486.12, stdev=9146.86 00:09:49.869 clat (usec): min=127, max=663, avg=427.21, stdev=107.87 00:09:49.869 lat (usec): min=137, max=689, avg=455.70, stdev=111.64 00:09:49.869 clat percentiles (usec): 00:09:49.869 | 1.00th=[ 200], 5.00th=[ 247], 10.00th=[ 273], 20.00th=[ 326], 00:09:49.869 | 30.00th=[ 371], 40.00th=[ 412], 50.00th=[ 445], 60.00th=[ 457], 00:09:49.869 | 70.00th=[ 478], 80.00th=[ 529], 90.00th=[ 570], 95.00th=[ 594], 00:09:49.869 | 99.00th=[ 652], 99.50th=[ 660], 99.90th=[ 660], 99.95th=[ 660], 00:09:49.869 | 99.99th=[ 660] 00:09:49.869 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:49.869 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:49.869 lat (usec) : 250=3.85%, 500=44.34%, 750=22.39%, 1000=28.93% 00:09:49.869 lat (msec) : 2=0.49% 00:09:49.869 cpu : usr=2.40%, sys=4.40%, ctx=1635, majf=0, minf=1 00:09:49.869 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.869 issued rwts: total=611,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.869 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.869 00:09:49.869 Run status group 0 (all jobs): 00:09:49.869 READ: bw=2442KiB/s (2500kB/s), 2442KiB/s-2442KiB/s (2500kB/s-2500kB/s), io=2444KiB (2503kB), run=1001-1001msec 00:09:49.869 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:09:49.869 00:09:49.869 Disk stats (read/write): 00:09:49.869 nvme0n1: ios=562/991, merge=0/0, ticks=469/408, in_queue=877, util=93.39% 00:09:49.869 07:08:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:50.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:50.128 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:50.128 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:09:50.128 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:50.128 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:50.128 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:50.128 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:50.128 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:09:50.128 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:50.128 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:50.128 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:50.128 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:50.128 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:50.128 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:50.128 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:50.128 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:50.128 rmmod nvme_tcp 00:09:50.128 rmmod nvme_fabrics 00:09:50.128 rmmod nvme_keyring 00:09:50.389 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:50.389 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:50.389 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:50.389 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3370313 ']' 00:09:50.389 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3370313 00:09:50.389 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 3370313 ']' 00:09:50.389 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 3370313 00:09:50.389 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:09:50.389 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:50.389 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3370313 00:09:50.389 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:50.389 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:50.389 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3370313' 00:09:50.389 killing process with pid 3370313 00:09:50.389 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 3370313 00:09:50.389 07:08:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 3370313 00:09:50.389 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:50.389 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:50.390 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:50.390 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:50.390 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:50.390 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:50.390 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:50.390 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:50.390 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:50.390 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.390 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.390 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:52.935 00:09:52.935 real 0m17.994s 00:09:52.935 user 0m46.012s 00:09:52.935 sys 0m6.514s 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.935 ************************************ 00:09:52.935 END TEST nvmf_nmic 00:09:52.935 ************************************ 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:52.935 ************************************ 00:09:52.935 START TEST nvmf_fio_target 00:09:52.935 ************************************ 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:52.935 * Looking for test storage... 
00:09:52.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:52.935 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:52.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.936 --rc genhtml_branch_coverage=1 00:09:52.936 --rc genhtml_function_coverage=1 00:09:52.936 --rc genhtml_legend=1 00:09:52.936 --rc geninfo_all_blocks=1 00:09:52.936 --rc geninfo_unexecuted_blocks=1 00:09:52.936 00:09:52.936 ' 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:52.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.936 --rc genhtml_branch_coverage=1 00:09:52.936 --rc genhtml_function_coverage=1 00:09:52.936 --rc genhtml_legend=1 00:09:52.936 --rc geninfo_all_blocks=1 00:09:52.936 --rc geninfo_unexecuted_blocks=1 00:09:52.936 00:09:52.936 ' 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:52.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.936 --rc genhtml_branch_coverage=1 00:09:52.936 --rc genhtml_function_coverage=1 00:09:52.936 --rc genhtml_legend=1 00:09:52.936 --rc geninfo_all_blocks=1 00:09:52.936 --rc geninfo_unexecuted_blocks=1 00:09:52.936 00:09:52.936 ' 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:52.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.936 --rc genhtml_branch_coverage=1 00:09:52.936 --rc genhtml_function_coverage=1 00:09:52.936 --rc genhtml_legend=1 00:09:52.936 --rc geninfo_all_blocks=1 00:09:52.936 --rc geninfo_unexecuted_blocks=1 00:09:52.936 00:09:52.936 ' 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.936 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:52.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:52.936 07:08:15 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:52.936 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.225 07:08:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:01.225 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:01.225 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.225 07:08:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:01.225 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:01.226 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:01.226 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:01.226 07:08:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:01.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:01.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.726 ms 00:10:01.226 00:10:01.226 --- 10.0.0.2 ping statistics --- 00:10:01.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.226 rtt min/avg/max/mdev = 0.726/0.726/0.726/0.000 ms 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:01.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:01.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:10:01.226 00:10:01.226 --- 10.0.0.1 ping statistics --- 00:10:01.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.226 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3376473 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3376473 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 3376473 ']' 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:01.226 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.226 [2024-11-20 07:08:22.576871] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
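[annotation] Condensed, the nvmftestinit steps traced above set up a two-endpoint NVMe/TCP topology on one box: the target port (cvl_0_0, 10.0.0.2/24) is moved into the cvl_0_0_ns_spdk network namespace while the initiator port (cvl_0_1, 10.0.0.1/24) stays on the host, an iptables rule admits TCP port 4420, and both directions are ping-verified before the target starts. A simplified shell sketch reconstructed from the traced commands (not the verbatim common.sh code):

  ip netns add cvl_0_0_ns_spdk                              # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                        # host -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> host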
00:10:01.226 [2024-11-20 07:08:22.576937] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.226 [2024-11-20 07:08:22.675037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:01.226 [2024-11-20 07:08:22.728212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.226 [2024-11-20 07:08:22.728260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.226 [2024-11-20 07:08:22.728269] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.226 [2024-11-20 07:08:22.728277] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.226 [2024-11-20 07:08:22.728283] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.226 [2024-11-20 07:08:22.730677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.226 [2024-11-20 07:08:22.730840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.226 [2024-11-20 07:08:22.731002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.226 [2024-11-20 07:08:22.731001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:01.226 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:01.226 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:10:01.226 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:01.226 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:01.226 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.226 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.226 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:01.488 [2024-11-20 07:08:23.607590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.488 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:01.750 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:01.750 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.011 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:02.011 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.272 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:02.272 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.272 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:02.272 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:02.534 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.795 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:02.795 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:03.055 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:03.055 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:03.316 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:03.316 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:03.316 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:03.577 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:03.577 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:03.838 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:03.838 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:04.100 07:08:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.100 [2024-11-20 07:08:26.323448] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.100 07:08:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:04.361 07:08:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:04.622 07:08:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:06.538 07:08:28 
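[annotation] The RPC sequence traced above builds the target configuration the fio jobs will exercise: a TCP transport, seven 64 MiB / 512 B-block malloc bdevs, a RAID-0 array and a concat array over some of them, and one subsystem exposing four namespaces (Malloc0, Malloc1, raid0, concat0) on a 10.0.0.2:4420 listener. Condensed into a sketch (RPC_PY abbreviates the full scripts/rpc.py path; the in-log invocations, which interleave these steps and pass extra hostnqn/hostid flags to nvme connect, are the authoritative order):

  RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC_PY nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 7); do $RPC_PY bdev_malloc_create 64 512; done          # Malloc0..Malloc6
  $RPC_PY bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $RPC_PY bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $RPC_PY nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in Malloc0 Malloc1 raid0 concat0; do
      $RPC_PY nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev
  done
  $RPC_PY nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420  # namespaces surface as nvme0n1..n4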
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:06.538 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:10:06.538 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:06.538 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:10:06.538 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:10:06.538 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:10:08.465 07:08:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:08.465 07:08:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:08.465 07:08:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:08.465 07:08:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:10:08.465 07:08:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:08.465 07:08:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:10:08.465 07:08:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:08.465 [global] 00:10:08.465 thread=1 00:10:08.465 invalidate=1 00:10:08.465 rw=write 00:10:08.465 time_based=1 00:10:08.465 runtime=1 00:10:08.465 ioengine=libaio 00:10:08.465 direct=1 00:10:08.466 bs=4096 00:10:08.466 iodepth=1 00:10:08.466 norandommap=0 00:10:08.466 numjobs=1 00:10:08.466 00:10:08.466 verify_dump=1 00:10:08.466 verify_backlog=512 00:10:08.466 verify_state_save=0 00:10:08.466 do_verify=1 00:10:08.466 verify=crc32c-intel 00:10:08.466 [job0] 00:10:08.466 filename=/dev/nvme0n1 00:10:08.466 [job1] 00:10:08.466 filename=/dev/nvme0n2 00:10:08.466 [job2] 00:10:08.466 filename=/dev/nvme0n3 00:10:08.466 [job3] 00:10:08.466 filename=/dev/nvme0n4 00:10:08.466 Could not set queue depth (nvme0n1) 00:10:08.466 Could not set queue depth (nvme0n2) 00:10:08.466 Could not set queue depth (nvme0n3) 00:10:08.466 Could not set queue depth (nvme0n4) 00:10:08.730 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:08.730 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:08.730 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:08.730 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:08.730 fio-3.35 00:10:08.730 Starting 4 threads 00:10:10.137 00:10:10.137 job0: (groupid=0, jobs=1): err= 0: pid=3378196: Wed Nov 20 07:08:32 2024 00:10:10.137 read: IOPS=681, BW=2725KiB/s (2791kB/s)(2728KiB/1001msec) 00:10:10.137 slat (nsec): min=6701, max=47857, avg=25754.25, stdev=6977.14 00:10:10.137 clat (usec): min=310, max=1050, avg=774.03, stdev=101.85 00:10:10.137 lat (usec): min=338, max=1077, avg=799.79, stdev=103.05 00:10:10.137 clat percentiles (usec): 00:10:10.137 | 1.00th=[ 494], 5.00th=[ 594], 10.00th=[ 644], 20.00th=[ 693], 
00:10:10.137 | 30.00th=[ 725], 40.00th=[ 750], 50.00th=[ 783], 60.00th=[ 816], 00:10:10.137 | 70.00th=[ 840], 80.00th=[ 865], 90.00th=[ 889], 95.00th=[ 914], 00:10:10.137 | 99.00th=[ 979], 99.50th=[ 1004], 99.90th=[ 1057], 99.95th=[ 1057], 00:10:10.137 | 99.99th=[ 1057] 00:10:10.137 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:10.137 slat (nsec): min=9409, max=55108, avg=32192.23, stdev=9377.86 00:10:10.137 clat (usec): min=134, max=1422, avg=399.23, stdev=125.28 00:10:10.137 lat (usec): min=158, max=1458, avg=431.42, stdev=127.06 00:10:10.137 clat percentiles (usec): 00:10:10.137 | 1.00th=[ 184], 5.00th=[ 227], 10.00th=[ 265], 20.00th=[ 297], 00:10:10.137 | 30.00th=[ 314], 40.00th=[ 338], 50.00th=[ 383], 60.00th=[ 424], 00:10:10.137 | 70.00th=[ 457], 80.00th=[ 502], 90.00th=[ 578], 95.00th=[ 611], 00:10:10.137 | 99.00th=[ 685], 99.50th=[ 701], 99.90th=[ 1287], 99.95th=[ 1418], 00:10:10.137 | 99.99th=[ 1418] 00:10:10.137 bw ( KiB/s): min= 4096, max= 4096, per=32.60%, avg=4096.00, stdev= 0.00, samples=1 00:10:10.137 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:10.137 lat (usec) : 250=4.75%, 500=43.55%, 750=27.32%, 1000=24.09% 00:10:10.137 lat (msec) : 2=0.29% 00:10:10.137 cpu : usr=2.90%, sys=7.20%, ctx=1707, majf=0, minf=1 00:10:10.137 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.137 issued rwts: total=682,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.137 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.137 job1: (groupid=0, jobs=1): err= 0: pid=3378217: Wed Nov 20 07:08:32 2024 00:10:10.137 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:10.137 slat (nsec): min=8865, max=53455, avg=29880.99, stdev=5240.81 00:10:10.137 clat (usec): min=587, max=1258, avg=926.93, stdev=109.95 00:10:10.137 lat (usec): min=599, max=1309, avg=956.81, stdev=110.66 00:10:10.137 clat percentiles (usec): 00:10:10.137 | 1.00th=[ 627], 5.00th=[ 717], 10.00th=[ 783], 20.00th=[ 840], 00:10:10.137 | 30.00th=[ 881], 40.00th=[ 914], 50.00th=[ 938], 60.00th=[ 963], 00:10:10.137 | 70.00th=[ 988], 80.00th=[ 1020], 90.00th=[ 1057], 95.00th=[ 1090], 00:10:10.137 | 99.00th=[ 1156], 99.50th=[ 1188], 99.90th=[ 1254], 99.95th=[ 1254], 00:10:10.137 | 99.99th=[ 1254] 00:10:10.137 write: IOPS=862, BW=3449KiB/s (3531kB/s)(3452KiB/1001msec); 0 zone resets 00:10:10.137 slat (nsec): min=9282, max=60863, avg=32497.28, stdev=10484.80 00:10:10.137 clat (usec): min=224, max=938, avg=545.29, stdev=133.44 00:10:10.137 lat (usec): min=235, max=973, avg=577.79, stdev=136.73 00:10:10.137 clat percentiles (usec): 00:10:10.137 | 1.00th=[ 251], 5.00th=[ 318], 10.00th=[ 371], 20.00th=[ 424], 00:10:10.137 | 30.00th=[ 474], 40.00th=[ 515], 50.00th=[ 553], 60.00th=[ 578], 00:10:10.137 | 70.00th=[ 619], 80.00th=[ 660], 90.00th=[ 717], 95.00th=[ 758], 00:10:10.137 | 99.00th=[ 832], 99.50th=[ 865], 99.90th=[ 938], 99.95th=[ 938], 00:10:10.137 | 99.99th=[ 938] 00:10:10.137 bw ( KiB/s): min= 4096, max= 4096, per=32.60%, avg=4096.00, stdev= 0.00, samples=1 00:10:10.137 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:10.137 lat (usec) : 250=0.58%, 500=22.25%, 750=39.05%, 1000=28.95% 00:10:10.137 lat (msec) : 2=9.16% 00:10:10.137 cpu : usr=3.80%, sys=4.80%, ctx=1377, majf=0, minf=1 00:10:10.137 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.137 issued rwts: total=512,863,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.137 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.137 job2: (groupid=0, jobs=1): err= 0: pid=3378239: Wed Nov 20 07:08:32 2024 00:10:10.137 read: IOPS=169, BW=680KiB/s (696kB/s)(704KiB/1036msec) 00:10:10.137 slat (nsec): min=8940, max=50148, avg=25503.23, stdev=5606.57 00:10:10.137 clat (usec): min=773, max=42118, avg=4092.72, stdev=10556.05 00:10:10.137 lat (usec): min=800, max=42144, avg=4118.22, stdev=10556.39 00:10:10.137 clat percentiles (usec): 00:10:10.137 | 1.00th=[ 775], 5.00th=[ 955], 10.00th=[ 996], 20.00th=[ 1037], 00:10:10.137 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1123], 60.00th=[ 1172], 00:10:10.137 | 70.00th=[ 1205], 80.00th=[ 1254], 90.00th=[ 1369], 95.00th=[41157], 00:10:10.137 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:10.137 | 99.99th=[42206] 00:10:10.137 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:10:10.137 slat (nsec): min=9943, max=57004, avg=28243.74, stdev=10486.89 00:10:10.137 clat (usec): min=261, max=2349, avg=568.80, stdev=151.74 00:10:10.137 lat (usec): min=272, max=2382, avg=597.05, stdev=156.04 00:10:10.137 clat percentiles (usec): 00:10:10.137 | 1.00th=[ 281], 5.00th=[ 334], 10.00th=[ 375], 20.00th=[ 453], 00:10:10.137 | 30.00th=[ 490], 40.00th=[ 545], 50.00th=[ 578], 60.00th=[ 611], 00:10:10.137 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 725], 95.00th=[ 766], 00:10:10.137 | 99.00th=[ 840], 99.50th=[ 873], 99.90th=[ 2343], 99.95th=[ 2343], 00:10:10.137 | 99.99th=[ 2343] 00:10:10.137 bw ( KiB/s): min= 4096, max= 4096, per=32.60%, avg=4096.00, stdev= 0.00, samples=1 00:10:10.137 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:10.137 lat (usec) : 500=23.40%, 750=46.08%, 1000=7.41% 00:10:10.137 lat (msec) : 2=21.08%, 4=0.15%, 50=1.89% 00:10:10.137 cpu : usr=1.16%, sys=1.55%, ctx=691, majf=0, minf=1 00:10:10.137 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.137 issued rwts: total=176,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.137 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.137 job3: (groupid=0, jobs=1): err= 0: pid=3378246: Wed Nov 20 07:08:32 2024 00:10:10.137 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:10.137 slat (nsec): min=7283, max=64535, avg=28012.55, stdev=4107.22 00:10:10.137 clat (usec): min=546, max=1443, avg=955.61, stdev=112.82 00:10:10.137 lat (usec): min=592, max=1470, avg=983.62, stdev=112.49 00:10:10.137 clat percentiles (usec): 00:10:10.137 | 1.00th=[ 635], 5.00th=[ 758], 10.00th=[ 824], 20.00th=[ 873], 00:10:10.138 | 30.00th=[ 906], 40.00th=[ 938], 50.00th=[ 963], 60.00th=[ 988], 00:10:10.138 | 70.00th=[ 1012], 80.00th=[ 1045], 90.00th=[ 1090], 95.00th=[ 1123], 00:10:10.138 | 99.00th=[ 1270], 99.50th=[ 1287], 99.90th=[ 1450], 99.95th=[ 1450], 00:10:10.138 | 99.99th=[ 1450] 00:10:10.138 write: IOPS=854, BW=3417KiB/s (3499kB/s)(3420KiB/1001msec); 0 zone resets 00:10:10.138 slat (nsec): min=9519, max=75189, avg=26992.83, stdev=12715.23 00:10:10.138 clat (usec): min=178, max=1917, avg=542.15, 
stdev=137.32 00:10:10.138 lat (usec): min=197, max=1927, avg=569.14, stdev=141.30 00:10:10.138 clat percentiles (usec): 00:10:10.138 | 1.00th=[ 258], 5.00th=[ 330], 10.00th=[ 359], 20.00th=[ 441], 00:10:10.138 | 30.00th=[ 478], 40.00th=[ 506], 50.00th=[ 537], 60.00th=[ 570], 00:10:10.138 | 70.00th=[ 603], 80.00th=[ 652], 90.00th=[ 709], 95.00th=[ 750], 00:10:10.138 | 99.00th=[ 840], 99.50th=[ 889], 99.90th=[ 1926], 99.95th=[ 1926], 00:10:10.138 | 99.99th=[ 1926] 00:10:10.138 bw ( KiB/s): min= 4096, max= 4096, per=32.60%, avg=4096.00, stdev= 0.00, samples=1 00:10:10.138 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:10.138 lat (usec) : 250=0.44%, 500=22.97%, 750=37.75%, 1000=25.75% 00:10:10.138 lat (msec) : 2=13.09% 00:10:10.138 cpu : usr=1.90%, sys=5.60%, ctx=1369, majf=0, minf=1 00:10:10.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.138 issued rwts: total=512,855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.138 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.138 00:10:10.138 Run status group 0 (all jobs): 00:10:10.138 READ: bw=7266KiB/s (7441kB/s), 680KiB/s-2725KiB/s (696kB/s-2791kB/s), io=7528KiB (7709kB), run=1001-1036msec 00:10:10.138 WRITE: bw=12.3MiB/s (12.9MB/s), 1977KiB/s-4092KiB/s (2024kB/s-4190kB/s), io=12.7MiB (13.3MB), run=1001-1036msec 00:10:10.138 00:10:10.138 Disk stats (read/write): 00:10:10.138 nvme0n1: ios=534/960, merge=0/0, ticks=1151/288, in_queue=1439, util=83.97% 00:10:10.138 nvme0n2: ios=535/564, merge=0/0, ticks=1368/246, in_queue=1614, util=87.84% 00:10:10.138 nvme0n3: ios=188/512, merge=0/0, ticks=618/269, in_queue=887, util=95.24% 00:10:10.138 nvme0n4: ios=534/589, merge=0/0, ticks=1342/284, in_queue=1626, util=94.22% 00:10:10.138 07:08:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:10.138 [global] 00:10:10.138 thread=1 00:10:10.138 invalidate=1 00:10:10.138 rw=randwrite 00:10:10.138 time_based=1 00:10:10.138 runtime=1 00:10:10.138 ioengine=libaio 00:10:10.138 direct=1 00:10:10.138 bs=4096 00:10:10.138 iodepth=1 00:10:10.138 norandommap=0 00:10:10.138 numjobs=1 00:10:10.138 00:10:10.138 verify_dump=1 00:10:10.138 verify_backlog=512 00:10:10.138 verify_state_save=0 00:10:10.138 do_verify=1 00:10:10.138 verify=crc32c-intel 00:10:10.138 [job0] 00:10:10.138 filename=/dev/nvme0n1 00:10:10.138 [job1] 00:10:10.138 filename=/dev/nvme0n2 00:10:10.138 [job2] 00:10:10.138 filename=/dev/nvme0n3 00:10:10.138 [job3] 00:10:10.138 filename=/dev/nvme0n4 00:10:10.138 Could not set queue depth (nvme0n1) 00:10:10.138 Could not set queue depth (nvme0n2) 00:10:10.138 Could not set queue depth (nvme0n3) 00:10:10.138 Could not set queue depth (nvme0n4) 00:10:10.398 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.398 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.398 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.398 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.398 fio-3.35 00:10:10.398 Starting 4 threads 00:10:11.810 
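[annotation] Each fio-wrapper invocation in this log runs one short, verified fio pass across the four connected namespaces; only rw, bs, and iodepth vary between the four passes. Reconstructed from the job file printed above, an approximately equivalent plain fio command line (options before the first --name are global; this is an illustration, not the wrapper's literal expansion):

  fio --thread --ioengine=libaio --direct=1 --bs=4096 --iodepth=1 \
      --rw=randwrite --time_based=1 --runtime=1 --numjobs=1 --invalidate=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
      --verify_backlog=512 --verify_state_save=0 \
      --name=job0 --filename=/dev/nvme0n1 \
      --name=job1 --filename=/dev/nvme0n2 \
      --name=job2 --filename=/dev/nvme0n3 \
      --name=job3 --filename=/dev/nvme0n4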
00:10:11.810 job0: (groupid=0, jobs=1): err= 0: pid=3378714: Wed Nov 20 07:08:33 2024 00:10:11.810 read: IOPS=23, BW=95.5KiB/s (97.8kB/s)(96.0KiB/1005msec) 00:10:11.810 slat (nsec): min=9293, max=30722, avg=25075.83, stdev=4934.97 00:10:11.810 clat (usec): min=938, max=42080, avg=28189.92, stdev=19596.41 00:10:11.810 lat (usec): min=969, max=42106, avg=28214.99, stdev=19598.04 00:10:11.810 clat percentiles (usec): 00:10:11.810 | 1.00th=[ 938], 5.00th=[ 1004], 10.00th=[ 1029], 20.00th=[ 1074], 00:10:11.810 | 30.00th=[ 1172], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:10:11.810 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:11.810 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:11.810 | 99.99th=[42206] 00:10:11.810 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:10:11.810 slat (nsec): min=3031, max=52438, avg=13518.93, stdev=8093.95 00:10:11.810 clat (usec): min=261, max=1003, avg=622.62, stdev=123.86 00:10:11.810 lat (usec): min=271, max=1014, avg=636.14, stdev=124.90 00:10:11.810 clat percentiles (usec): 00:10:11.810 | 1.00th=[ 359], 5.00th=[ 404], 10.00th=[ 453], 20.00th=[ 519], 00:10:11.810 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:10:11.810 | 70.00th=[ 701], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 807], 00:10:11.810 | 99.00th=[ 914], 99.50th=[ 963], 99.90th=[ 1004], 99.95th=[ 1004], 00:10:11.810 | 99.99th=[ 1004] 00:10:11.810 bw ( KiB/s): min= 4096, max= 4096, per=42.53%, avg=4096.00, stdev= 0.00, samples=1 00:10:11.810 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:11.810 lat (usec) : 500=16.79%, 750=65.49%, 1000=13.25% 00:10:11.810 lat (msec) : 2=1.49%, 50=2.99% 00:10:11.810 cpu : usr=0.30%, sys=0.60%, ctx=537, majf=0, minf=1 00:10:11.810 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.810 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.810 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.810 job1: (groupid=0, jobs=1): err= 0: pid=3378734: Wed Nov 20 07:08:33 2024 00:10:11.810 read: IOPS=16, BW=67.5KiB/s (69.1kB/s)(68.0KiB/1008msec) 00:10:11.810 slat (nsec): min=9870, max=26586, avg=25308.41, stdev=3982.67 00:10:11.810 clat (usec): min=40802, max=41530, avg=41005.64, stdev=171.44 00:10:11.810 lat (usec): min=40828, max=41540, avg=41030.94, stdev=168.30 00:10:11.810 clat percentiles (usec): 00:10:11.810 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:10:11.810 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:11.810 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:10:11.810 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:11.810 | 99.99th=[41681] 00:10:11.810 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:10:11.810 slat (nsec): min=9791, max=65863, avg=30121.75, stdev=9011.20 00:10:11.810 clat (usec): min=188, max=967, avg=567.49, stdev=146.41 00:10:11.810 lat (usec): min=199, max=1000, avg=597.61, stdev=149.03 00:10:11.810 clat percentiles (usec): 00:10:11.810 | 1.00th=[ 269], 5.00th=[ 306], 10.00th=[ 388], 20.00th=[ 441], 00:10:11.810 | 30.00th=[ 478], 40.00th=[ 523], 50.00th=[ 562], 60.00th=[ 611], 00:10:11.810 | 70.00th=[ 660], 80.00th=[ 701], 90.00th=[ 750], 
95.00th=[ 799], 00:10:11.810 | 99.00th=[ 898], 99.50th=[ 955], 99.90th=[ 971], 99.95th=[ 971], 00:10:11.810 | 99.99th=[ 971] 00:10:11.810 bw ( KiB/s): min= 4096, max= 4096, per=42.53%, avg=4096.00, stdev= 0.00, samples=1 00:10:11.810 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:11.810 lat (usec) : 250=0.38%, 500=34.40%, 750=52.36%, 1000=9.64% 00:10:11.810 lat (msec) : 50=3.21% 00:10:11.810 cpu : usr=0.89%, sys=1.39%, ctx=531, majf=0, minf=1 00:10:11.810 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.810 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.810 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.810 job2: (groupid=0, jobs=1): err= 0: pid=3378756: Wed Nov 20 07:08:33 2024 00:10:11.810 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:11.810 slat (nsec): min=8358, max=30164, avg=25191.56, stdev=1001.31 00:10:11.810 clat (usec): min=489, max=1162, avg=970.28, stdev=62.52 00:10:11.810 lat (usec): min=515, max=1188, avg=995.47, stdev=62.54 00:10:11.810 clat percentiles (usec): 00:10:11.810 | 1.00th=[ 799], 5.00th=[ 873], 10.00th=[ 898], 20.00th=[ 938], 00:10:11.810 | 30.00th=[ 955], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 988], 00:10:11.810 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1029], 95.00th=[ 1074], 00:10:11.810 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1156], 99.95th=[ 1156], 00:10:11.810 | 99.99th=[ 1156] 00:10:11.810 write: IOPS=936, BW=3744KiB/s (3834kB/s)(3748KiB/1001msec); 0 zone resets 00:10:11.810 slat (nsec): min=9491, max=64116, avg=28074.17, stdev=9113.56 00:10:11.810 clat (usec): min=191, max=769, avg=483.52, stdev=130.73 00:10:11.810 lat (usec): min=218, max=803, avg=511.60, stdev=133.38 00:10:11.811 clat percentiles (usec): 00:10:11.811 | 1.00th=[ 212], 5.00th=[ 277], 10.00th=[ 310], 20.00th=[ 355], 00:10:11.811 | 30.00th=[ 396], 40.00th=[ 449], 50.00th=[ 486], 60.00th=[ 529], 00:10:11.811 | 70.00th=[ 570], 80.00th=[ 603], 90.00th=[ 660], 95.00th=[ 685], 00:10:11.811 | 99.00th=[ 742], 99.50th=[ 758], 99.90th=[ 766], 99.95th=[ 766], 00:10:11.811 | 99.99th=[ 766] 00:10:11.811 bw ( KiB/s): min= 4096, max= 4096, per=42.53%, avg=4096.00, stdev= 0.00, samples=1 00:10:11.811 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:11.811 lat (usec) : 250=2.07%, 500=32.85%, 750=29.40%, 1000=26.57% 00:10:11.811 lat (msec) : 2=9.11% 00:10:11.811 cpu : usr=2.50%, sys=3.70%, ctx=1449, majf=0, minf=1 00:10:11.811 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.811 issued rwts: total=512,937,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.811 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.811 job3: (groupid=0, jobs=1): err= 0: pid=3378763: Wed Nov 20 07:08:33 2024 00:10:11.811 read: IOPS=18, BW=74.0KiB/s (75.8kB/s)(76.0KiB/1027msec) 00:10:11.811 slat (nsec): min=9873, max=25939, avg=24694.89, stdev=3596.12 00:10:11.811 clat (usec): min=892, max=42059, avg=39576.08, stdev=9376.08 00:10:11.811 lat (usec): min=902, max=42084, avg=39600.78, stdev=9379.66 00:10:11.811 clat percentiles (usec): 00:10:11.811 | 1.00th=[ 889], 5.00th=[ 889], 10.00th=[41157], 
20.00th=[41157], 00:10:11.811 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:11.811 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:11.811 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:11.811 | 99.99th=[42206] 00:10:11.811 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:10:11.811 slat (nsec): min=9202, max=49232, avg=26338.64, stdev=9675.53 00:10:11.811 clat (usec): min=226, max=878, avg=502.50, stdev=122.52 00:10:11.811 lat (usec): min=236, max=910, avg=528.84, stdev=127.21 00:10:11.811 clat percentiles (usec): 00:10:11.811 | 1.00th=[ 258], 5.00th=[ 326], 10.00th=[ 347], 20.00th=[ 400], 00:10:11.811 | 30.00th=[ 445], 40.00th=[ 465], 50.00th=[ 486], 60.00th=[ 515], 00:10:11.811 | 70.00th=[ 562], 80.00th=[ 611], 90.00th=[ 676], 95.00th=[ 734], 00:10:11.811 | 99.00th=[ 791], 99.50th=[ 832], 99.90th=[ 881], 99.95th=[ 881], 00:10:11.811 | 99.99th=[ 881] 00:10:11.811 bw ( KiB/s): min= 4096, max= 4096, per=42.53%, avg=4096.00, stdev= 0.00, samples=1 00:10:11.811 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:11.811 lat (usec) : 250=0.94%, 500=53.11%, 750=39.74%, 1000=2.82% 00:10:11.811 lat (msec) : 50=3.39% 00:10:11.811 cpu : usr=0.78%, sys=1.27%, ctx=531, majf=0, minf=1 00:10:11.811 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.811 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.811 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.811 00:10:11.811 Run status group 0 (all jobs): 00:10:11.811 READ: bw=2228KiB/s (2281kB/s), 67.5KiB/s-2046KiB/s (69.1kB/s-2095kB/s), io=2288KiB (2343kB), run=1001-1027msec 00:10:11.811 WRITE: bw=9632KiB/s (9863kB/s), 1994KiB/s-3744KiB/s (2042kB/s-3834kB/s), io=9892KiB (10.1MB), run=1001-1027msec 00:10:11.811 00:10:11.811 Disk stats (read/write): 00:10:11.811 nvme0n1: ios=44/512, merge=0/0, ticks=1436/307, in_queue=1743, util=96.79% 00:10:11.811 nvme0n2: ios=36/512, merge=0/0, ticks=1478/276, in_queue=1754, util=96.84% 00:10:11.811 nvme0n3: ios=563/671, merge=0/0, ticks=657/283, in_queue=940, util=100.00% 00:10:11.811 nvme0n4: ios=14/512, merge=0/0, ticks=546/255, in_queue=801, util=89.52% 00:10:11.811 07:08:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:11.811 [global] 00:10:11.811 thread=1 00:10:11.811 invalidate=1 00:10:11.811 rw=write 00:10:11.811 time_based=1 00:10:11.811 runtime=1 00:10:11.811 ioengine=libaio 00:10:11.811 direct=1 00:10:11.811 bs=4096 00:10:11.811 iodepth=128 00:10:11.811 norandommap=0 00:10:11.811 numjobs=1 00:10:11.811 00:10:11.811 verify_dump=1 00:10:11.811 verify_backlog=512 00:10:11.811 verify_state_save=0 00:10:11.811 do_verify=1 00:10:11.811 verify=crc32c-intel 00:10:11.811 [job0] 00:10:11.811 filename=/dev/nvme0n1 00:10:11.811 [job1] 00:10:11.811 filename=/dev/nvme0n2 00:10:11.811 [job2] 00:10:11.811 filename=/dev/nvme0n3 00:10:11.811 [job3] 00:10:11.811 filename=/dev/nvme0n4 00:10:11.811 Could not set queue depth (nvme0n1) 00:10:11.811 Could not set queue depth (nvme0n2) 00:10:11.811 Could not set queue depth (nvme0n3) 00:10:11.811 Could not set queue depth (nvme0n4) 00:10:12.072 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.072 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.072 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.072 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.072 fio-3.35 00:10:12.072 Starting 4 threads 00:10:13.460 00:10:13.460 job0: (groupid=0, jobs=1): err= 0: pid=3379214: Wed Nov 20 07:08:35 2024 00:10:13.460 read: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec) 00:10:13.460 slat (nsec): min=920, max=42421k, avg=70298.63, stdev=583374.86 00:10:13.460 clat (usec): min=5841, max=51835, avg=8374.73, stdev=3222.45 00:10:13.460 lat (usec): min=5883, max=52216, avg=8445.03, stdev=3270.68 00:10:13.460 clat percentiles (usec): 00:10:13.460 | 1.00th=[ 6259], 5.00th=[ 6915], 10.00th=[ 7373], 20.00th=[ 7701], 00:10:13.460 | 30.00th=[ 7898], 40.00th=[ 8029], 50.00th=[ 8094], 60.00th=[ 8291], 00:10:13.460 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 8979], 95.00th=[ 9372], 00:10:13.460 | 99.00th=[10421], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:10:13.460 | 99.99th=[51643] 00:10:13.460 write: IOPS=7906, BW=30.9MiB/s (32.4MB/s)(31.0MiB/1003msec); 0 zone resets 00:10:13.460 slat (nsec): min=1604, max=2747.5k, avg=55721.49, stdev=241395.40 00:10:13.460 clat (usec): min=2028, max=51824, avg=7874.28, stdev=4497.32 00:10:13.460 lat (usec): min=2290, max=51826, avg=7930.00, stdev=4500.85 00:10:13.460 clat percentiles (usec): 00:10:13.460 | 1.00th=[ 5473], 5.00th=[ 6390], 10.00th=[ 6652], 20.00th=[ 6915], 00:10:13.460 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7635], 00:10:13.460 | 70.00th=[ 7898], 80.00th=[ 8029], 90.00th=[ 8356], 95.00th=[ 8848], 00:10:13.460 | 99.00th=[49021], 99.50th=[50594], 99.90th=[51119], 99.95th=[51643], 00:10:13.460 | 99.99th=[51643] 00:10:13.460 bw ( KiB/s): min=29904, max=32520, per=30.90%, avg=31212.00, stdev=1849.79, samples=2 00:10:13.460 iops : min= 7476, max= 8130, avg=7803.00, stdev=462.45, samples=2 00:10:13.460 lat (msec) : 4=0.21%, 10=98.01%, 20=0.97%, 50=0.13%, 100=0.69% 00:10:13.460 cpu : usr=3.09%, sys=3.49%, ctx=1094, majf=0, minf=1 00:10:13.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:13.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.460 issued rwts: total=7680,7930,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.460 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.460 job1: (groupid=0, jobs=1): err= 0: pid=3379229: Wed Nov 20 07:08:35 2024 00:10:13.460 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:10:13.460 slat (nsec): min=911, max=13260k, avg=118939.38, stdev=792746.37 00:10:13.460 clat (usec): min=2140, max=68131, avg=14938.58, stdev=7759.74 00:10:13.460 lat (usec): min=2149, max=68139, avg=15057.52, stdev=7839.70 00:10:13.460 clat percentiles (usec): 00:10:13.460 | 1.00th=[ 7701], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10290], 00:10:13.460 | 30.00th=[10552], 40.00th=[10945], 50.00th=[12125], 60.00th=[13698], 00:10:13.460 | 70.00th=[15664], 80.00th=[18744], 90.00th=[22938], 95.00th=[28443], 00:10:13.460 | 99.00th=[54264], 99.50th=[59507], 99.90th=[67634], 99.95th=[67634], 00:10:13.460 | 99.99th=[67634] 00:10:13.460 write: IOPS=4140, BW=16.2MiB/s 
(17.0MB/s)(16.3MiB/1006msec); 0 zone resets 00:10:13.460 slat (nsec): min=1569, max=12726k, avg=117089.14, stdev=715558.99 00:10:13.460 clat (usec): min=688, max=90374, avg=15961.68, stdev=16500.60 00:10:13.460 lat (usec): min=1226, max=90380, avg=16078.77, stdev=16615.37 00:10:13.460 clat percentiles (usec): 00:10:13.460 | 1.00th=[ 2311], 5.00th=[ 5800], 10.00th=[ 7570], 20.00th=[ 8455], 00:10:13.460 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[10290], 60.00th=[13960], 00:10:13.460 | 70.00th=[14615], 80.00th=[15139], 90.00th=[21890], 95.00th=[70779], 00:10:13.460 | 99.00th=[78119], 99.50th=[82314], 99.90th=[90702], 99.95th=[90702], 00:10:13.460 | 99.99th=[90702] 00:10:13.460 bw ( KiB/s): min=14280, max=18488, per=16.22%, avg=16384.00, stdev=2975.51, samples=2 00:10:13.460 iops : min= 3570, max= 4622, avg=4096.00, stdev=743.88, samples=2 00:10:13.460 lat (usec) : 750=0.01% 00:10:13.460 lat (msec) : 2=0.31%, 4=0.71%, 10=31.97%, 20=51.45%, 50=11.39% 00:10:13.460 lat (msec) : 100=4.15% 00:10:13.460 cpu : usr=4.18%, sys=3.18%, ctx=368, majf=0, minf=2 00:10:13.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:13.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.460 issued rwts: total=4096,4165,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.460 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.460 job2: (groupid=0, jobs=1): err= 0: pid=3379248: Wed Nov 20 07:08:35 2024 00:10:13.460 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:10:13.460 slat (nsec): min=944, max=14644k, avg=101619.53, stdev=752151.98 00:10:13.460 clat (usec): min=4440, max=33805, avg=12419.18, stdev=4816.22 00:10:13.460 lat (usec): min=4459, max=33814, avg=12520.80, stdev=4879.19 00:10:13.460 clat percentiles (usec): 00:10:13.460 | 1.00th=[ 5800], 5.00th=[ 7832], 10.00th=[ 8225], 20.00th=[ 8848], 00:10:13.460 | 30.00th=[ 9503], 40.00th=[10421], 50.00th=[10945], 60.00th=[11207], 00:10:13.460 | 70.00th=[12911], 80.00th=[15401], 90.00th=[19006], 95.00th=[22676], 00:10:13.460 | 99.00th=[30016], 99.50th=[30540], 99.90th=[33817], 99.95th=[33817], 00:10:13.460 | 99.99th=[33817] 00:10:13.460 write: IOPS=5046, BW=19.7MiB/s (20.7MB/s)(19.8MiB/1006msec); 0 zone resets 00:10:13.460 slat (nsec): min=1668, max=11711k, avg=99961.80, stdev=542845.93 00:10:13.460 clat (usec): min=1143, max=34394, avg=13870.91, stdev=7110.98 00:10:13.460 lat (usec): min=1152, max=34406, avg=13970.87, stdev=7160.57 00:10:13.460 clat percentiles (usec): 00:10:13.460 | 1.00th=[ 4178], 5.00th=[ 5276], 10.00th=[ 6915], 20.00th=[ 7767], 00:10:13.460 | 30.00th=[ 8455], 40.00th=[ 9503], 50.00th=[11994], 60.00th=[14615], 00:10:13.460 | 70.00th=[15270], 80.00th=[21365], 90.00th=[24773], 95.00th=[28443], 00:10:13.460 | 99.00th=[32637], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:10:13.460 | 99.99th=[34341] 00:10:13.460 bw ( KiB/s): min=19120, max=20472, per=19.60%, avg=19796.00, stdev=956.01, samples=2 00:10:13.460 iops : min= 4780, max= 5118, avg=4949.00, stdev=239.00, samples=2 00:10:13.460 lat (msec) : 2=0.09%, 4=0.22%, 10=38.07%, 20=45.72%, 50=15.90% 00:10:13.460 cpu : usr=3.08%, sys=5.67%, ctx=428, majf=0, minf=2 00:10:13.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:13.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:10:13.460 issued rwts: total=4608,5077,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.460 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.460 job3: (groupid=0, jobs=1): err= 0: pid=3379255: Wed Nov 20 07:08:35 2024 00:10:13.461 read: IOPS=8151, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1005msec) 00:10:13.461 slat (nsec): min=1012, max=6945.9k, avg=60879.20, stdev=436540.87 00:10:13.461 clat (usec): min=3242, max=14946, avg=8347.91, stdev=1803.81 00:10:13.461 lat (usec): min=3251, max=14978, avg=8408.79, stdev=1829.53 00:10:13.461 clat percentiles (usec): 00:10:13.461 | 1.00th=[ 5211], 5.00th=[ 5997], 10.00th=[ 6456], 20.00th=[ 6980], 00:10:13.461 | 30.00th=[ 7308], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8225], 00:10:13.461 | 70.00th=[ 8979], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[12125], 00:10:13.461 | 99.00th=[13698], 99.50th=[14222], 99.90th=[14746], 99.95th=[14746], 00:10:13.461 | 99.99th=[15008] 00:10:13.461 write: IOPS=8192, BW=32.0MiB/s (33.6MB/s)(32.2MiB/1005msec); 0 zone resets 00:10:13.461 slat (nsec): min=1735, max=6993.3k, avg=54565.10, stdev=438217.62 00:10:13.461 clat (usec): min=1190, max=14841, avg=7153.17, stdev=1794.69 00:10:13.461 lat (usec): min=1199, max=14860, avg=7207.73, stdev=1821.22 00:10:13.461 clat percentiles (usec): 00:10:13.461 | 1.00th=[ 3687], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 5145], 00:10:13.461 | 30.00th=[ 6259], 40.00th=[ 6980], 50.00th=[ 7308], 60.00th=[ 7570], 00:10:13.461 | 70.00th=[ 7898], 80.00th=[ 8160], 90.00th=[10028], 95.00th=[10421], 00:10:13.461 | 99.00th=[11731], 99.50th=[12125], 99.90th=[13566], 99.95th=[14222], 00:10:13.461 | 99.99th=[14877] 00:10:13.461 bw ( KiB/s): min=32768, max=32768, per=32.44%, avg=32768.00, stdev= 0.00, samples=2 00:10:13.461 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=2 00:10:13.461 lat (msec) : 2=0.01%, 4=0.82%, 10=86.18%, 20=12.99% 00:10:13.461 cpu : usr=7.07%, sys=9.16%, ctx=385, majf=0, minf=1 00:10:13.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:13.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.461 issued rwts: total=8192,8233,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.461 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.461 00:10:13.461 Run status group 0 (all jobs): 00:10:13.461 READ: bw=95.4MiB/s (100MB/s), 15.9MiB/s-31.8MiB/s (16.7MB/s-33.4MB/s), io=96.0MiB (101MB), run=1003-1006msec 00:10:13.461 WRITE: bw=98.6MiB/s (103MB/s), 16.2MiB/s-32.0MiB/s (17.0MB/s-33.6MB/s), io=99.2MiB (104MB), run=1003-1006msec 00:10:13.461 00:10:13.461 Disk stats (read/write): 00:10:13.461 nvme0n1: ios=6397/6656, merge=0/0, ticks=19886/15067, in_queue=34953, util=96.79% 00:10:13.461 nvme0n2: ios=3109/3584, merge=0/0, ticks=34145/38129, in_queue=72274, util=91.34% 00:10:13.461 nvme0n3: ios=4096/4143, merge=0/0, ticks=47563/53106, in_queue=100669, util=88.09% 00:10:13.461 nvme0n4: ios=6693/6984, merge=0/0, ticks=53109/47134, in_queue=100243, util=97.97% 00:10:13.461 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:13.461 [global] 00:10:13.461 thread=1 00:10:13.461 invalidate=1 00:10:13.461 rw=randwrite 00:10:13.461 time_based=1 00:10:13.461 runtime=1 00:10:13.461 ioengine=libaio 00:10:13.461 direct=1 00:10:13.461 bs=4096 00:10:13.461 iodepth=128 00:10:13.461 
norandommap=0 00:10:13.461 numjobs=1 00:10:13.461 00:10:13.461 verify_dump=1 00:10:13.461 verify_backlog=512 00:10:13.461 verify_state_save=0 00:10:13.461 do_verify=1 00:10:13.461 verify=crc32c-intel 00:10:13.461 [job0] 00:10:13.461 filename=/dev/nvme0n1 00:10:13.461 [job1] 00:10:13.461 filename=/dev/nvme0n2 00:10:13.461 [job2] 00:10:13.461 filename=/dev/nvme0n3 00:10:13.461 [job3] 00:10:13.461 filename=/dev/nvme0n4 00:10:13.461 Could not set queue depth (nvme0n1) 00:10:13.461 Could not set queue depth (nvme0n2) 00:10:13.461 Could not set queue depth (nvme0n3) 00:10:13.461 Could not set queue depth (nvme0n4) 00:10:13.723 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:13.723 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:13.723 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:13.723 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:13.723 fio-3.35 00:10:13.723 Starting 4 threads 00:10:15.124 00:10:15.124 job0: (groupid=0, jobs=1): err= 0: pid=3379717: Wed Nov 20 07:08:36 2024 00:10:15.124 read: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec) 00:10:15.124 slat (nsec): min=970, max=17234k, avg=82084.99, stdev=694748.82 00:10:15.124 clat (usec): min=3127, max=35498, avg=10560.29, stdev=4330.47 00:10:15.124 lat (usec): min=3132, max=35527, avg=10642.37, stdev=4384.04 00:10:15.124 clat percentiles (usec): 00:10:15.124 | 1.00th=[ 4359], 5.00th=[ 6325], 10.00th=[ 6783], 20.00th=[ 7439], 00:10:15.124 | 30.00th=[ 7635], 40.00th=[ 7898], 50.00th=[ 8291], 60.00th=[10028], 00:10:15.124 | 70.00th=[11994], 80.00th=[14091], 90.00th=[17957], 95.00th=[18482], 00:10:15.124 | 99.00th=[22676], 99.50th=[23725], 99.90th=[24249], 99.95th=[29230], 00:10:15.124 | 99.99th=[35390] 00:10:15.124 write: IOPS=6909, BW=27.0MiB/s (28.3MB/s)(27.1MiB/1005msec); 0 zone resets 00:10:15.124 slat (nsec): min=1651, max=8789.3k, avg=60201.56, stdev=419365.43 00:10:15.124 clat (usec): min=1142, max=63152, avg=8202.33, stdev=6462.14 00:10:15.124 lat (usec): min=1153, max=63161, avg=8262.53, stdev=6506.85 00:10:15.124 clat percentiles (usec): 00:10:15.124 | 1.00th=[ 2573], 5.00th=[ 4015], 10.00th=[ 4490], 20.00th=[ 5604], 00:10:15.124 | 30.00th=[ 6587], 40.00th=[ 6849], 50.00th=[ 7242], 60.00th=[ 7570], 00:10:15.124 | 70.00th=[ 7767], 80.00th=[ 8160], 90.00th=[10028], 95.00th=[12387], 00:10:15.124 | 99.00th=[49546], 99.50th=[53740], 99.90th=[57934], 99.95th=[60031], 00:10:15.124 | 99.99th=[63177] 00:10:15.124 bw ( KiB/s): min=26896, max=27632, per=27.14%, avg=27264.00, stdev=520.43, samples=2 00:10:15.124 iops : min= 6724, max= 6908, avg=6816.00, stdev=130.11, samples=2 00:10:15.124 lat (msec) : 2=0.15%, 4=2.58%, 10=72.10%, 20=21.93%, 50=2.78% 00:10:15.124 lat (msec) : 100=0.46% 00:10:15.124 cpu : usr=4.28%, sys=7.67%, ctx=545, majf=0, minf=2 00:10:15.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:15.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.124 issued rwts: total=6656,6944,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.124 job1: (groupid=0, jobs=1): err= 0: pid=3379727: Wed Nov 20 07:08:36 2024 00:10:15.124 read: IOPS=6649, 
BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec) 00:10:15.124 slat (nsec): min=898, max=43999k, avg=78977.65, stdev=669525.47 00:10:15.124 clat (usec): min=2804, max=53260, avg=9729.72, stdev=4702.83 00:10:15.124 lat (usec): min=2811, max=53268, avg=9808.70, stdev=4733.51 00:10:15.124 clat percentiles (usec): 00:10:15.124 | 1.00th=[ 4113], 5.00th=[ 6128], 10.00th=[ 6783], 20.00th=[ 8356], 00:10:15.124 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:10:15.124 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10814], 95.00th=[11994], 00:10:15.124 | 99.00th=[21627], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:10:15.124 | 99.99th=[53216] 00:10:15.124 write: IOPS=6888, BW=26.9MiB/s (28.2MB/s)(26.9MiB/1001msec); 0 zone resets 00:10:15.124 slat (nsec): min=1483, max=5534.3k, avg=65534.71, stdev=341035.03 00:10:15.124 clat (usec): min=714, max=52396, avg=8985.78, stdev=4960.28 00:10:15.124 lat (usec): min=1123, max=52398, avg=9051.31, stdev=4958.12 00:10:15.124 clat percentiles (usec): 00:10:15.124 | 1.00th=[ 2737], 5.00th=[ 4359], 10.00th=[ 5669], 20.00th=[ 6915], 00:10:15.124 | 30.00th=[ 7701], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9110], 00:10:15.124 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[12911], 00:10:15.124 | 99.00th=[23987], 99.50th=[51643], 99.90th=[52167], 99.95th=[52167], 00:10:15.124 | 99.99th=[52167] 00:10:15.124 bw ( KiB/s): min=28672, max=28672, per=28.54%, avg=28672.00, stdev= 0.00, samples=1 00:10:15.124 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:10:15.124 lat (usec) : 750=0.01% 00:10:15.124 lat (msec) : 2=0.20%, 4=2.08%, 10=76.94%, 20=19.00%, 50=0.84% 00:10:15.124 lat (msec) : 100=0.93% 00:10:15.124 cpu : usr=2.60%, sys=4.10%, ctx=756, majf=0, minf=1 00:10:15.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:15.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.125 issued rwts: total=6656,6895,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.125 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.125 job2: (groupid=0, jobs=1): err= 0: pid=3379745: Wed Nov 20 07:08:36 2024 00:10:15.125 read: IOPS=4304, BW=16.8MiB/s (17.6MB/s)(17.2MiB/1020msec) 00:10:15.125 slat (nsec): min=1017, max=44166k, avg=116280.70, stdev=1055537.83 00:10:15.125 clat (usec): min=972, max=69880, avg=14859.97, stdev=12691.17 00:10:15.125 lat (usec): min=2910, max=72881, avg=14976.25, stdev=12763.14 00:10:15.125 clat percentiles (usec): 00:10:15.125 | 1.00th=[ 5604], 5.00th=[ 7242], 10.00th=[ 7832], 20.00th=[ 8455], 00:10:15.125 | 30.00th=[ 8848], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[11207], 00:10:15.125 | 70.00th=[12780], 80.00th=[15533], 90.00th=[31327], 95.00th=[51643], 00:10:15.125 | 99.00th=[61604], 99.50th=[63701], 99.90th=[69731], 99.95th=[69731], 00:10:15.125 | 99.99th=[69731] 00:10:15.125 write: IOPS=4517, BW=17.6MiB/s (18.5MB/s)(18.0MiB/1020msec); 0 zone resets 00:10:15.125 slat (nsec): min=1518, max=10493k, avg=85518.82, stdev=554163.84 00:10:15.125 clat (usec): min=1167, max=74081, avg=13860.64, stdev=15182.11 00:10:15.125 lat (usec): min=1178, max=74091, avg=13946.16, stdev=15279.60 00:10:15.125 clat percentiles (usec): 00:10:15.125 | 1.00th=[ 2245], 5.00th=[ 3916], 10.00th=[ 5407], 20.00th=[ 7242], 00:10:15.125 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:10:15.125 | 70.00th=[10159], 80.00th=[12256], 90.00th=[28181], 
95.00th=[61080], 00:10:15.125 | 99.00th=[65799], 99.50th=[69731], 99.90th=[71828], 99.95th=[71828], 00:10:15.125 | 99.99th=[73925] 00:10:15.125 bw ( KiB/s): min=11664, max=25200, per=18.35%, avg=18432.00, stdev=9571.40, samples=2 00:10:15.125 iops : min= 2916, max= 6300, avg=4608.00, stdev=2392.85, samples=2 00:10:15.125 lat (usec) : 1000=0.01% 00:10:15.125 lat (msec) : 2=0.33%, 4=2.57%, 10=56.40%, 20=28.85%, 50=5.25% 00:10:15.125 lat (msec) : 100=6.60% 00:10:15.125 cpu : usr=3.24%, sys=5.40%, ctx=373, majf=0, minf=1 00:10:15.125 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:15.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.125 issued rwts: total=4391,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.125 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.125 job3: (groupid=0, jobs=1): err= 0: pid=3379752: Wed Nov 20 07:08:36 2024 00:10:15.125 read: IOPS=6957, BW=27.2MiB/s (28.5MB/s)(27.4MiB/1007msec) 00:10:15.125 slat (nsec): min=956, max=9271.3k, avg=72367.04, stdev=494876.25 00:10:15.125 clat (usec): min=3779, max=23609, avg=9371.15, stdev=2223.01 00:10:15.125 lat (usec): min=3787, max=25013, avg=9443.52, stdev=2265.57 00:10:15.125 clat percentiles (usec): 00:10:15.125 | 1.00th=[ 5211], 5.00th=[ 6456], 10.00th=[ 7111], 20.00th=[ 7832], 00:10:15.125 | 30.00th=[ 8160], 40.00th=[ 8356], 50.00th=[ 8979], 60.00th=[ 9634], 00:10:15.125 | 70.00th=[10290], 80.00th=[10945], 90.00th=[11731], 95.00th=[13304], 00:10:15.125 | 99.00th=[16909], 99.50th=[17957], 99.90th=[23725], 99.95th=[23725], 00:10:15.125 | 99.99th=[23725] 00:10:15.125 write: IOPS=7118, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1007msec); 0 zone resets 00:10:15.125 slat (nsec): min=1604, max=9471.3k, avg=62230.18, stdev=350180.20 00:10:15.125 clat (usec): min=3795, max=23648, avg=8614.26, stdev=2070.67 00:10:15.125 lat (usec): min=3799, max=23658, avg=8676.49, stdev=2090.58 00:10:15.125 clat percentiles (usec): 00:10:15.125 | 1.00th=[ 4555], 5.00th=[ 5604], 10.00th=[ 6521], 20.00th=[ 7177], 00:10:15.125 | 30.00th=[ 7504], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8717], 00:10:15.125 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[11076], 95.00th=[12387], 00:10:15.125 | 99.00th=[14877], 99.50th=[15401], 99.90th=[23725], 99.95th=[23725], 00:10:15.125 | 99.99th=[23725] 00:10:15.125 bw ( KiB/s): min=24526, max=32768, per=28.52%, avg=28647.00, stdev=5827.97, samples=2 00:10:15.125 iops : min= 6131, max= 8192, avg=7161.50, stdev=1457.35, samples=2 00:10:15.125 lat (msec) : 4=0.28%, 10=70.75%, 20=28.84%, 50=0.13% 00:10:15.125 cpu : usr=4.47%, sys=5.96%, ctx=843, majf=0, minf=2 00:10:15.125 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:15.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.125 issued rwts: total=7006,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.125 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.125 00:10:15.125 Run status group 0 (all jobs): 00:10:15.125 READ: bw=94.6MiB/s (99.2MB/s), 16.8MiB/s-27.2MiB/s (17.6MB/s-28.5MB/s), io=96.5MiB (101MB), run=1001-1020msec 00:10:15.125 WRITE: bw=98.1MiB/s (103MB/s), 17.6MiB/s-27.8MiB/s (18.5MB/s-29.2MB/s), io=100MiB (105MB), run=1001-1020msec 00:10:15.125 00:10:15.125 Disk stats (read/write): 00:10:15.125 nvme0n1: ios=5217/5632, merge=0/0, 
ticks=55892/46474, in_queue=102366, util=99.60% 00:10:15.125 nvme0n2: ios=5617/5632, merge=0/0, ticks=23468/20763, in_queue=44231, util=97.25% 00:10:15.125 nvme0n3: ios=4149/4378, merge=0/0, ticks=39934/44057, in_queue=83991, util=100.00% 00:10:15.125 nvme0n4: ios=5673/5943, merge=0/0, ticks=32936/28991, in_queue=61927, util=96.16% 00:10:15.125 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:15.125 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3380008 00:10:15.125 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:15.125 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:15.125 [global] 00:10:15.125 thread=1 00:10:15.125 invalidate=1 00:10:15.125 rw=read 00:10:15.125 time_based=1 00:10:15.125 runtime=10 00:10:15.125 ioengine=libaio 00:10:15.125 direct=1 00:10:15.125 bs=4096 00:10:15.125 iodepth=1 00:10:15.125 norandommap=1 00:10:15.125 numjobs=1 00:10:15.125 00:10:15.125 [job0] 00:10:15.125 filename=/dev/nvme0n1 00:10:15.125 [job1] 00:10:15.125 filename=/dev/nvme0n2 00:10:15.125 [job2] 00:10:15.125 filename=/dev/nvme0n3 00:10:15.125 [job3] 00:10:15.125 filename=/dev/nvme0n4 00:10:15.125 Could not set queue depth (nvme0n1) 00:10:15.125 Could not set queue depth (nvme0n2) 00:10:15.125 Could not set queue depth (nvme0n3) 00:10:15.125 Could not set queue depth (nvme0n4) 00:10:15.386 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.386 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.386 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.386 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.386 fio-3.35 00:10:15.386 Starting 4 threads 00:10:17.936 07:08:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:18.197 07:08:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:18.197 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=13381632, buflen=4096 00:10:18.197 fio: pid=3380250, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:18.197 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=10317824, buflen=4096 00:10:18.197 fio: pid=3380244, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:18.197 07:08:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:18.197 07:08:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:18.458 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=294912, buflen=4096 00:10:18.458 fio: pid=3380221, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:18.458 07:08:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:10:18.458 07:08:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:18.721 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=12296192, buflen=4096 00:10:18.721 fio: pid=3380225, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:18.721 07:08:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:18.721 07:08:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:18.721 00:10:18.721 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3380221: Wed Nov 20 07:08:40 2024 00:10:18.721 read: IOPS=24, BW=96.5KiB/s (98.8kB/s)(288KiB/2984msec) 00:10:18.721 slat (usec): min=25, max=222, avg=28.86, stdev=22.98 00:10:18.721 clat (usec): min=1021, max=42112, avg=41109.38, stdev=4811.35 00:10:18.721 lat (usec): min=1058, max=42138, avg=41138.28, stdev=4810.40 00:10:18.721 clat percentiles (usec): 00:10:18.721 | 1.00th=[ 1020], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:18.721 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:18.721 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:18.721 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:18.721 | 99.99th=[42206] 00:10:18.721 bw ( KiB/s): min= 96, max= 104, per=0.87%, avg=97.60, stdev= 3.58, samples=5 00:10:18.721 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:10:18.721 lat (msec) : 2=1.37%, 50=97.26% 00:10:18.721 cpu : usr=0.00%, sys=0.10%, ctx=74, majf=0, minf=2 00:10:18.721 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.721 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.721 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.721 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.721 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3380225: Wed Nov 20 07:08:40 2024 00:10:18.721 read: IOPS=948, BW=3793KiB/s (3884kB/s)(11.7MiB/3166msec) 00:10:18.721 slat (usec): min=6, max=17317, avg=39.21, stdev=430.46 00:10:18.721 clat (usec): min=628, max=8933, avg=1001.95, stdev=195.28 00:10:18.721 lat (usec): min=653, max=18283, avg=1041.16, stdev=471.87 00:10:18.721 clat percentiles (usec): 00:10:18.721 | 1.00th=[ 734], 5.00th=[ 807], 10.00th=[ 865], 20.00th=[ 930], 00:10:18.721 | 30.00th=[ 971], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1029], 00:10:18.721 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1123], 00:10:18.721 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 1532], 99.95th=[ 5997], 00:10:18.721 | 99.99th=[ 8979] 00:10:18.721 bw ( KiB/s): min= 3561, max= 3968, per=34.18%, avg=3826.83, stdev=143.24, samples=6 00:10:18.721 iops : min= 890, max= 992, avg=956.67, stdev=35.90, samples=6 00:10:18.721 lat (usec) : 750=1.50%, 1000=43.12% 00:10:18.721 lat (msec) : 2=55.28%, 10=0.07% 00:10:18.721 cpu : usr=1.04%, sys=2.78%, ctx=3008, majf=0, minf=2 00:10:18.721 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.721 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.721 issued rwts: total=3003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.721 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.721 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3380244: Wed Nov 20 07:08:40 2024 00:10:18.721 read: IOPS=901, BW=3605KiB/s (3692kB/s)(9.84MiB/2795msec) 00:10:18.721 slat (nsec): min=3559, max=63770, avg=26324.58, stdev=4724.66 00:10:18.721 clat (usec): min=371, max=5684, avg=1068.50, stdev=192.68 00:10:18.721 lat (usec): min=381, max=5701, avg=1094.82, stdev=193.82 00:10:18.721 clat percentiles (usec): 00:10:18.721 | 1.00th=[ 562], 5.00th=[ 709], 10.00th=[ 898], 20.00th=[ 1012], 00:10:18.721 | 30.00th=[ 1057], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1123], 00:10:18.721 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1205], 00:10:18.721 | 99.00th=[ 1270], 99.50th=[ 1303], 99.90th=[ 3720], 99.95th=[ 3884], 00:10:18.721 | 99.99th=[ 5669] 00:10:18.721 bw ( KiB/s): min= 3504, max= 3704, per=31.88%, avg=3569.60, stdev=80.28, samples=5 00:10:18.721 iops : min= 876, max= 926, avg=892.40, stdev=20.07, samples=5 00:10:18.721 lat (usec) : 500=0.36%, 750=6.55%, 1000=12.02% 00:10:18.721 lat (msec) : 2=80.87%, 4=0.12%, 10=0.04% 00:10:18.721 cpu : usr=1.32%, sys=3.79%, ctx=2521, majf=0, minf=1 00:10:18.721 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.721 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.721 issued rwts: total=2520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.721 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.721 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3380250: Wed Nov 20 07:08:40 2024 00:10:18.721 read: IOPS=1235, BW=4941KiB/s (5059kB/s)(12.8MiB/2645msec) 00:10:18.721 slat (nsec): min=6350, max=63376, avg=24216.18, stdev=8245.86 00:10:18.721 clat (usec): min=348, max=41989, avg=771.83, stdev=1606.14 00:10:18.721 lat (usec): min=375, max=42016, avg=796.05, stdev=1606.44 00:10:18.721 clat percentiles (usec): 00:10:18.721 | 1.00th=[ 465], 5.00th=[ 545], 10.00th=[ 578], 20.00th=[ 627], 00:10:18.721 | 30.00th=[ 668], 40.00th=[ 693], 50.00th=[ 717], 60.00th=[ 742], 00:10:18.721 | 70.00th=[ 766], 80.00th=[ 791], 90.00th=[ 824], 95.00th=[ 848], 00:10:18.721 | 99.00th=[ 898], 99.50th=[ 930], 99.90th=[41681], 99.95th=[42206], 00:10:18.721 | 99.99th=[42206] 00:10:18.721 bw ( KiB/s): min= 4496, max= 5472, per=46.65%, avg=5222.40, stdev=413.43, samples=5 00:10:18.721 iops : min= 1124, max= 1368, avg=1305.60, stdev=103.36, samples=5 00:10:18.721 lat (usec) : 500=2.29%, 750=59.94%, 1000=37.52% 00:10:18.721 lat (msec) : 2=0.06%, 50=0.15% 00:10:18.721 cpu : usr=1.66%, sys=4.77%, ctx=3269, majf=0, minf=2 00:10:18.721 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.721 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.721 issued rwts: total=3268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.721 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.721 00:10:18.721 Run status group 0 (all jobs): 00:10:18.721 READ: bw=10.9MiB/s (11.5MB/s), 
96.5KiB/s-4941KiB/s (98.8kB/s-5059kB/s), io=34.6MiB (36.3MB), run=2645-3166msec 00:10:18.721 00:10:18.721 Disk stats (read/write): 00:10:18.721 nvme0n1: ios=69/0, merge=0/0, ticks=2836/0, in_queue=2836, util=94.76% 00:10:18.721 nvme0n2: ios=2945/0, merge=0/0, ticks=2895/0, in_queue=2895, util=94.45% 00:10:18.721 nvme0n3: ios=2348/0, merge=0/0, ticks=2277/0, in_queue=2277, util=95.99% 00:10:18.721 nvme0n4: ios=3266/0, merge=0/0, ticks=2171/0, in_queue=2171, util=96.42% 00:10:18.721 07:08:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:18.721 07:08:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:18.983 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:18.983 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:19.243 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:19.243 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:19.503 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:19.503 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:19.503 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:19.503 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3380008 00:10:19.503 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:19.503 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:19.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.763 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:19.763 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:10:19.763 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:19.763 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:19.763 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:19.763 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:19.763 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:10:19.763 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:19.763 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:19.763 nvmf hotplug test: fio failed as expected 00:10:19.763 
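
The hotplug pass above follows a fixed pattern: fio.sh@58 backgrounds a 10-second read workload against the four exported namespaces, fio.sh@65-66 then delete the malloc and raid bdevs out from under it over RPC, and fio.sh@75 treats a non-zero fio exit as success. A condensed sketch of that flow in plain bash, assuming the target and initiator from this run are already up; the loop names only the bdevs visible in the log above:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # background the read workload (same flags as fio.sh@58 above)
    "$spdk"/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3                                            # let all four jobs start issuing I/O
    "$spdk"/scripts/rpc.py bdev_raid_delete concat0    # pull the raid bdevs first
    "$spdk"/scripts/rpc.py bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc3 Malloc4 Malloc5 Malloc6; do
        "$spdk"/scripts/rpc.py bdev_malloc_delete "$m" # then the plain malloc bdevs
    done
    # every job should now fail with err=95 (Operation not supported); a zero
    # exit status from fio would mean the removal was never observed
    if wait "$fio_pid"; then
        echo "unexpected: fio survived bdev removal"; exit 1
    else
        echo "nvmf hotplug test: fio failed as expected"
    fi

The err=95 (file:io_u.c:1889, Operation not supported) lines in the per-job summaries above are therefore the expected outcome of this test, not a failure of the run.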
07:08:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:19.763 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:19.763 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:19.763 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:19.763 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:19.763 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:19.763 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:19.763 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:19.763 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:19.763 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:19.763 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:19.763 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:19.763 rmmod nvme_tcp 00:10:20.024 rmmod nvme_fabrics 00:10:20.024 rmmod nvme_keyring 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3376473 ']' 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3376473 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 3376473 ']' 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 3376473 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3376473 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3376473' 00:10:20.024 killing process with pid 3376473 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 3376473 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 3376473 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:20.024 07:08:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.024 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.571 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:22.571 00:10:22.571 real 0m29.576s 00:10:22.571 user 2m35.999s 00:10:22.571 sys 0m9.672s 00:10:22.571 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:22.571 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.571 ************************************ 00:10:22.571 END TEST nvmf_fio_target 00:10:22.571 ************************************ 00:10:22.571 07:08:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:22.571 07:08:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:22.571 07:08:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:22.571 07:08:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:22.571 ************************************ 00:10:22.571 START TEST nvmf_bdevio 00:10:22.571 ************************************ 00:10:22.571 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:22.571 * Looking for test storage... 
00:10:22.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.571 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:22.571 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:22.571 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:22.571 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:22.571 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.571 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:22.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.572 --rc genhtml_branch_coverage=1 00:10:22.572 --rc genhtml_function_coverage=1 00:10:22.572 --rc genhtml_legend=1 00:10:22.572 --rc geninfo_all_blocks=1 00:10:22.572 --rc geninfo_unexecuted_blocks=1 00:10:22.572 00:10:22.572 ' 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:22.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.572 --rc genhtml_branch_coverage=1 00:10:22.572 --rc genhtml_function_coverage=1 00:10:22.572 --rc genhtml_legend=1 00:10:22.572 --rc geninfo_all_blocks=1 00:10:22.572 --rc geninfo_unexecuted_blocks=1 00:10:22.572 00:10:22.572 ' 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:22.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.572 --rc genhtml_branch_coverage=1 00:10:22.572 --rc genhtml_function_coverage=1 00:10:22.572 --rc genhtml_legend=1 00:10:22.572 --rc geninfo_all_blocks=1 00:10:22.572 --rc geninfo_unexecuted_blocks=1 00:10:22.572 00:10:22.572 ' 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:22.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.572 --rc genhtml_branch_coverage=1 00:10:22.572 --rc genhtml_function_coverage=1 00:10:22.572 --rc genhtml_legend=1 00:10:22.572 --rc geninfo_all_blocks=1 00:10:22.572 --rc geninfo_unexecuted_blocks=1 00:10:22.572 00:10:22.572 ' 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:22.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:22.572 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:22.573 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:22.573 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:22.573 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:22.573 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.573 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:22.573 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:22.573 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:22.573 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.573 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.573 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.573 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:22.573 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:22.573 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:22.573 07:08:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:30.716 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:30.716 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:30.716 07:08:51 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.716 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:30.717 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:30.717 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:30.717 
07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:30.717 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:30.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:30.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms
00:10:30.717 
00:10:30.717 --- 10.0.0.2 ping statistics ---
00:10:30.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:30.717 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:30.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:30.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms
00:10:30.717 
00:10:30.717 --- 10.0.0.1 ping statistics ---
00:10:30.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:30.717 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3385561
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3385561
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 3385561 ']'
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable
00:10:30.717 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:30.717 [2024-11-20 07:08:52.195112] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
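
The namespace plumbing above (nvmf/common.sh@259-291) is what gives the phy run its two endpoints: the target-side port of the e810 pair (cvl_0_0) moves into a private network namespace and gets 10.0.0.2, the initiator side (cvl_0_1) stays in the root namespace as 10.0.0.1, TCP/4420 is opened with a tagged iptables rule so teardown can strip it again, and both directions are ping-verified before nvmf_tgt starts inside the namespace. A condensed sketch of that sequence, assuming the same interface names as this run:

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1   # start from clean interfaces
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                      # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # open NVMe/TCP, tagged with a comment so nvmftestfini can remove the rule
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                       # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> initiator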
00:10:30.717 [2024-11-20 07:08:52.195191] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.717 [2024-11-20 07:08:52.296609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:30.717 [2024-11-20 07:08:52.348593] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.717 [2024-11-20 07:08:52.348647] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.717 [2024-11-20 07:08:52.348656] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.717 [2024-11-20 07:08:52.348663] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.717 [2024-11-20 07:08:52.348670] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:30.717 [2024-11-20 07:08:52.350725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:30.717 [2024-11-20 07:08:52.350887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:30.717 [2024-11-20 07:08:52.351046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.717 [2024-11-20 07:08:52.351046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:30.979 [2024-11-20 07:08:53.078917] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:30.979 Malloc0 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.979 07:08:53 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:30.979 [2024-11-20 07:08:53.155275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:30.979 { 00:10:30.979 "params": { 00:10:30.979 "name": "Nvme$subsystem", 00:10:30.979 "trtype": "$TEST_TRANSPORT", 00:10:30.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:30.979 "adrfam": "ipv4", 00:10:30.979 "trsvcid": "$NVMF_PORT", 00:10:30.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:30.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:30.979 "hdgst": ${hdgst:-false}, 00:10:30.979 "ddgst": ${ddgst:-false} 00:10:30.979 }, 00:10:30.979 "method": "bdev_nvme_attach_controller" 00:10:30.979 } 00:10:30.979 EOF 00:10:30.979 )") 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:30.979 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:30.979 "params": { 00:10:30.979 "name": "Nvme1", 00:10:30.979 "trtype": "tcp", 00:10:30.979 "traddr": "10.0.0.2", 00:10:30.979 "adrfam": "ipv4", 00:10:30.979 "trsvcid": "4420", 00:10:30.979 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:30.979 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:30.979 "hdgst": false, 00:10:30.979 "ddgst": false 00:10:30.979 }, 00:10:30.979 "method": "bdev_nvme_attach_controller" 00:10:30.979 }' 00:10:30.979 [2024-11-20 07:08:53.225149] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
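[Editorial note] The JSON printed above by gen_nvmf_target_json is only the parameter block handed to bdev_nvme_attach_controller; bdevio consumes it through --json /dev/fd/62. For readers replaying this by hand, a minimal standalone config sketch follows. The outer "subsystems"/"config" wrapper layout and the /tmp path are assumptions based on SPDK's usual JSON config shape, not copied from this run:

# Sketch (wrapper layout assumed): persist the generated attach-controller
# params as a standalone SPDK JSON config and feed it to bdevio by file
# instead of via a process-substitution fd.
cat > /tmp/bdevio.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# test/bdev/bdevio/bdevio --json /tmp/bdevio.json   # hypothetical by-file form of the --json /dev/fd/62 call above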
00:10:30.979 [2024-11-20 07:08:53.225224] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385610 ] 00:10:31.240 [2024-11-20 07:08:53.319805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:31.240 [2024-11-20 07:08:53.378287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.240 [2024-11-20 07:08:53.378449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.240 [2024-11-20 07:08:53.378449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:31.501 I/O targets: 00:10:31.501 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:31.501 00:10:31.501 00:10:31.501 CUnit - A unit testing framework for C - Version 2.1-3 00:10:31.501 http://cunit.sourceforge.net/ 00:10:31.501 00:10:31.501 00:10:31.501 Suite: bdevio tests on: Nvme1n1 00:10:31.501 Test: blockdev write read block ...passed 00:10:31.501 Test: blockdev write zeroes read block ...passed 00:10:31.502 Test: blockdev write zeroes read no split ...passed 00:10:31.502 Test: blockdev write zeroes read split ...passed 00:10:31.502 Test: blockdev write zeroes read split partial ...passed 00:10:31.502 Test: blockdev reset ...[2024-11-20 07:08:53.678528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:31.502 [2024-11-20 07:08:53.678622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b4970 (9): Bad file descriptor 00:10:31.502 [2024-11-20 07:08:53.735377] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:31.502 passed 00:10:31.502 Test: blockdev write read 8 blocks ...passed 00:10:31.502 Test: blockdev write read size > 128k ...passed 00:10:31.502 Test: blockdev write read invalid size ...passed 00:10:31.763 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:31.763 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:31.763 Test: blockdev write read max offset ...passed 00:10:31.763 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:31.763 Test: blockdev writev readv 8 blocks ...passed 00:10:31.763 Test: blockdev writev readv 30 x 1block ...passed 00:10:31.763 Test: blockdev writev readv block ...passed 00:10:31.763 Test: blockdev writev readv size > 128k ...passed 00:10:31.763 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:31.763 Test: blockdev comparev and writev ...[2024-11-20 07:08:53.915719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:31.763 [2024-11-20 07:08:53.915770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:31.763 [2024-11-20 07:08:53.915786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:31.763 [2024-11-20 07:08:53.915795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:31.763 [2024-11-20 07:08:53.916218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:31.763 [2024-11-20 07:08:53.916232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:31.763 [2024-11-20 07:08:53.916246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:31.763 [2024-11-20 07:08:53.916256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:31.763 [2024-11-20 07:08:53.916677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:31.763 [2024-11-20 07:08:53.916690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:31.763 [2024-11-20 07:08:53.916704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:31.763 [2024-11-20 07:08:53.916712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:31.763 [2024-11-20 07:08:53.917112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:31.763 [2024-11-20 07:08:53.917125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:31.763 [2024-11-20 07:08:53.917139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:31.763 [2024-11-20 07:08:53.917147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:31.763 passed 00:10:31.763 Test: blockdev nvme passthru rw ...passed 00:10:31.763 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:08:54.002767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:31.763 [2024-11-20 07:08:54.002785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:31.763 [2024-11-20 07:08:54.003054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:31.763 [2024-11-20 07:08:54.003065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:31.763 [2024-11-20 07:08:54.003329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:31.763 [2024-11-20 07:08:54.003341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:31.763 [2024-11-20 07:08:54.003602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:31.763 [2024-11-20 07:08:54.003614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:31.763 passed 00:10:31.763 Test: blockdev nvme admin passthru ...passed 00:10:32.024 Test: blockdev copy ...passed 00:10:32.024 00:10:32.024 Run Summary: Type Total Ran Passed Failed Inactive 00:10:32.024 suites 1 1 n/a 0 0 00:10:32.024 tests 23 23 23 0 0 00:10:32.024 asserts 152 152 152 0 n/a 00:10:32.024 00:10:32.024 Elapsed time = 1.036 seconds 00:10:32.024 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:32.024 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.024 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.024 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.024 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:32.024 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:32.024 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:32.024 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:32.024 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:32.024 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:32.024 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:32.024 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:32.024 rmmod nvme_tcp 00:10:32.024 rmmod nvme_fabrics 00:10:32.024 rmmod nvme_keyring 00:10:32.024 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:32.024 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:32.025 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
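[Editorial note] The bdevio suite above passed 23 of 23 tests against Nvme1n1 in about one second. The target it exercised was built entirely through rpc_cmd, which in these test scripts wraps scripts/rpc.py against the nvmf app's RPC socket. Replayed by hand, the setup reduces to roughly the sequence below; the arguments are copied verbatim from the log, and the rpc.py spelling is the assumed equivalent of the rpc_cmd wrapper:

# Create the TCP transport (flags -o -u 8192 taken verbatim from the script),
# back it with a 64 MiB / 512-byte-block malloc bdev -- the log later reports
# "Nvme1n1: 131072 blocks of 512 bytes (64 MiB)" -- then expose that bdev as
# namespace 1 of cnode1 listening on 10.0.0.2:4420 (-a allows any host,
# -s sets the serial number).
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420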
00:10:32.025 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3385561 ']' 00:10:32.025 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3385561 00:10:32.025 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 3385561 ']' 00:10:32.025 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 3385561 00:10:32.025 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:10:32.025 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:32.025 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3385561 00:10:32.286 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:10:32.286 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:10:32.286 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3385561' 00:10:32.286 killing process with pid 3385561 00:10:32.286 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 3385561 00:10:32.286 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 3385561 00:10:32.286 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:32.286 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:32.286 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:32.286 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:32.286 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:32.286 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:32.286 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:32.286 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:32.286 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:32.286 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.286 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.286 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.836 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:34.836 00:10:34.836 real 0m12.178s 00:10:34.836 user 0m12.797s 00:10:34.836 sys 0m6.236s 00:10:34.836 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:34.836 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:34.836 ************************************ 00:10:34.836 END TEST nvmf_bdevio 00:10:34.836 ************************************ 00:10:34.836 07:08:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:34.836 00:10:34.836 real 5m3.787s 00:10:34.836 user 11m41.320s 00:10:34.836 sys 1m50.028s 
00:10:34.836 07:08:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:34.836 07:08:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:34.836 ************************************ 00:10:34.836 END TEST nvmf_target_core 00:10:34.836 ************************************ 00:10:34.836 07:08:56 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:34.836 07:08:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:34.836 07:08:56 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:34.837 07:08:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:34.837 ************************************ 00:10:34.837 START TEST nvmf_target_extra 00:10:34.837 ************************************ 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:34.837 * Looking for test storage... 00:10:34.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:34.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.837 --rc genhtml_branch_coverage=1 00:10:34.837 --rc genhtml_function_coverage=1 00:10:34.837 --rc genhtml_legend=1 00:10:34.837 --rc geninfo_all_blocks=1 00:10:34.837 --rc geninfo_unexecuted_blocks=1 00:10:34.837 00:10:34.837 ' 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:34.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.837 --rc genhtml_branch_coverage=1 00:10:34.837 --rc genhtml_function_coverage=1 00:10:34.837 --rc genhtml_legend=1 00:10:34.837 --rc geninfo_all_blocks=1 00:10:34.837 --rc geninfo_unexecuted_blocks=1 00:10:34.837 00:10:34.837 ' 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:34.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.837 --rc genhtml_branch_coverage=1 00:10:34.837 --rc genhtml_function_coverage=1 00:10:34.837 --rc genhtml_legend=1 00:10:34.837 --rc geninfo_all_blocks=1 00:10:34.837 --rc geninfo_unexecuted_blocks=1 00:10:34.837 00:10:34.837 ' 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:34.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.837 --rc genhtml_branch_coverage=1 00:10:34.837 --rc genhtml_function_coverage=1 00:10:34.837 --rc genhtml_legend=1 00:10:34.837 --rc geninfo_all_blocks=1 00:10:34.837 --rc geninfo_unexecuted_blocks=1 00:10:34.837 00:10:34.837 ' 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.837 07:08:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.838 07:08:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.838 07:08:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.838 07:08:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:34.838 07:08:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.838 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:34.838 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:34.838 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:34.838 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.838 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.838 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.838 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:34.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:34.838 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:34.838 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:34.838 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:34.838 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:34.838 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:34.838 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:34.838 07:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:34.838 07:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:34.838 07:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:34.838 07:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:34.838 ************************************ 00:10:34.838 START TEST nvmf_example 00:10:34.838 ************************************ 00:10:34.838 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:35.100 * Looking for test storage... 
00:10:35.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:35.100 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:35.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.101 --rc genhtml_branch_coverage=1 00:10:35.101 --rc genhtml_function_coverage=1 00:10:35.101 --rc genhtml_legend=1 00:10:35.101 --rc geninfo_all_blocks=1 00:10:35.101 --rc geninfo_unexecuted_blocks=1 00:10:35.101 00:10:35.101 ' 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:35.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.101 --rc genhtml_branch_coverage=1 00:10:35.101 --rc genhtml_function_coverage=1 00:10:35.101 --rc genhtml_legend=1 00:10:35.101 --rc geninfo_all_blocks=1 00:10:35.101 --rc geninfo_unexecuted_blocks=1 00:10:35.101 00:10:35.101 ' 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:35.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.101 --rc genhtml_branch_coverage=1 00:10:35.101 --rc genhtml_function_coverage=1 00:10:35.101 --rc genhtml_legend=1 00:10:35.101 --rc geninfo_all_blocks=1 00:10:35.101 --rc geninfo_unexecuted_blocks=1 00:10:35.101 00:10:35.101 ' 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:35.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.101 --rc genhtml_branch_coverage=1 00:10:35.101 --rc genhtml_function_coverage=1 00:10:35.101 --rc genhtml_legend=1 00:10:35.101 --rc geninfo_all_blocks=1 00:10:35.101 --rc geninfo_unexecuted_blocks=1 00:10:35.101 00:10:35.101 ' 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:35.101 07:08:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:35.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:35.101 07:08:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:35.101 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:35.102 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:35.102 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.102 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:35.102 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:35.102 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:35.102 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.102 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.102 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.102 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:35.102 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:35.102 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:35.102 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:43.251 07:09:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:43.251 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:43.251 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:43.251 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:43.251 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.251 07:09:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:43.251 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:43.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:43.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:10:43.252 00:10:43.252 --- 10.0.0.2 ping statistics --- 00:10:43.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.252 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:43.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:43.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:10:43.252 00:10:43.252 --- 10.0.0.1 ping statistics --- 00:10:43.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.252 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3390426 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3390426 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 3390426 ']' 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:43.252 07:09:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:43.252 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.514 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:43.514 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:10:43.514 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:43.514 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:43.514 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.514 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:43.514 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.514 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.514 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.514 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:43.514 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.514 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.514 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.514 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:43.514 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:43.514 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.514 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.776 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.776 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:43.776 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:43.776 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.776 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.776 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.776 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:43.776 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:43.776 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.776 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.776 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:43.776 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:53.784 Initializing NVMe Controllers 00:10:53.784 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:53.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:53.784 Initialization complete. Launching workers. 00:10:53.784 ======================================================== 00:10:53.784 Latency(us) 00:10:53.784 Device Information : IOPS MiB/s Average min max 00:10:53.784 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18898.26 73.82 3386.63 624.78 19309.19 00:10:53.784 ======================================================== 00:10:53.784 Total : 18898.26 73.82 3386.63 624.78 19309.19 00:10:53.784 00:10:53.784 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:53.784 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:53.784 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:53.784 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:53.784 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:53.784 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:53.784 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:53.784 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:54.046 rmmod nvme_tcp 00:10:54.046 rmmod nvme_fabrics 00:10:54.046 rmmod nvme_keyring 00:10:54.046 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:54.046 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:54.046 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:54.046 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3390426 ']' 00:10:54.046 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3390426 00:10:54.046 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 3390426 ']' 00:10:54.046 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 3390426 00:10:54.046 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:10:54.046 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:54.046 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3390426 00:10:54.046 07:09:16 
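
[Editor's note] Stripped of the xtrace plumbing, the example test provisions the target over JSON-RPC and then measures it from the initiator side. A sketch of the equivalent calls using SPDK's scripts/rpc.py (shown here as rpc.py on PATH; the harness issues the same RPCs through its rpc_cmd wrapper, against the nvmf example app it started with -i 0 -g 10000 -m 0xF inside the target namespace, on the default /var/tmp/spdk.sock):

    # Target side: TCP transport, one 64 MiB malloc bdev with 512 B blocks,
    # exported as namespace 1 of cnode1 listening on 10.0.0.2:4420.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512          # returns the name Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: 10 s of 4 KiB random I/O at queue depth 64 with a 30% read
    # mix; this run reported ~18.9k IOPS at ~3.4 ms average latency.
    spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
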
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:10:54.046 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:10:54.046 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3390426' 00:10:54.046 killing process with pid 3390426 00:10:54.046 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 3390426 00:10:54.046 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 3390426 00:10:54.046 nvmf threads initialize successfully 00:10:54.046 bdev subsystem init successfully 00:10:54.047 created a nvmf target service 00:10:54.047 create targets's poll groups done 00:10:54.047 all subsystems of target started 00:10:54.047 nvmf target is running 00:10:54.047 all subsystems of target stopped 00:10:54.047 destroy targets's poll groups done 00:10:54.047 destroyed the nvmf target service 00:10:54.047 bdev subsystem finish successfully 00:10:54.047 nvmf threads destroy successfully 00:10:54.047 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:54.047 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:54.047 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:54.047 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:54.047 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:54.047 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:54.047 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:54.047 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:54.047 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:54.047 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.047 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.047 07:09:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:56.596 00:10:56.596 real 0m21.412s 00:10:56.596 user 0m46.550s 00:10:56.596 sys 0m7.076s 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:56.596 ************************************ 00:10:56.596 END TEST nvmf_example 00:10:56.596 ************************************ 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:56.596 ************************************ 00:10:56.596 START TEST nvmf_filesystem 00:10:56.596 ************************************ 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:56.596 * Looking for test storage... 00:10:56.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:56.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.596 --rc genhtml_branch_coverage=1 00:10:56.596 --rc genhtml_function_coverage=1 00:10:56.596 --rc genhtml_legend=1 00:10:56.596 --rc geninfo_all_blocks=1 00:10:56.596 --rc geninfo_unexecuted_blocks=1 00:10:56.596 00:10:56.596 ' 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:56.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.596 --rc genhtml_branch_coverage=1 00:10:56.596 --rc genhtml_function_coverage=1 00:10:56.596 --rc genhtml_legend=1 00:10:56.596 --rc geninfo_all_blocks=1 00:10:56.596 --rc geninfo_unexecuted_blocks=1 00:10:56.596 00:10:56.596 ' 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:56.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.596 --rc genhtml_branch_coverage=1 00:10:56.596 --rc genhtml_function_coverage=1 00:10:56.596 --rc genhtml_legend=1 00:10:56.596 --rc geninfo_all_blocks=1 00:10:56.596 --rc geninfo_unexecuted_blocks=1 00:10:56.596 00:10:56.596 ' 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:56.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.596 --rc genhtml_branch_coverage=1 00:10:56.596 --rc genhtml_function_coverage=1 00:10:56.596 --rc genhtml_legend=1 00:10:56.596 --rc geninfo_all_blocks=1 00:10:56.596 --rc geninfo_unexecuted_blocks=1 00:10:56.596 00:10:56.596 ' 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:56.596 07:09:18 
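
[Editor's note] The detour through scripts/common.sh above is autotest_common.sh probing the installed lcov version so it can pick compatible coverage flags (the --rc lcov_branch_coverage spellings selected here predate lcov 2.x). A condensed standalone sketch of the dotted-version comparison it runs, simplified from the traced cmp_versions, which also handles >, >=, <= and validates each field as a number:

    lt() {
        local -a ver1 ver2
        local v max
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        # Compare field by field; a missing field counts as 0.
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # versions are equal
    }

    lt 1.15 2 && echo "pre-2.x lcov: use the old --rc option spellings"
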
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:56.596 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:56.597 
07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:56.597 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:56.598 #define SPDK_CONFIG_H 00:10:56.598 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:56.598 #define SPDK_CONFIG_APPS 1 00:10:56.598 #define SPDK_CONFIG_ARCH native 00:10:56.598 #undef SPDK_CONFIG_ASAN 00:10:56.598 #undef SPDK_CONFIG_AVAHI 00:10:56.598 #undef SPDK_CONFIG_CET 00:10:56.598 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:56.598 #define SPDK_CONFIG_COVERAGE 1 00:10:56.598 #define SPDK_CONFIG_CROSS_PREFIX 00:10:56.598 #undef SPDK_CONFIG_CRYPTO 00:10:56.598 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:56.598 #undef SPDK_CONFIG_CUSTOMOCF 00:10:56.598 #undef SPDK_CONFIG_DAOS 00:10:56.598 #define SPDK_CONFIG_DAOS_DIR 00:10:56.598 #define SPDK_CONFIG_DEBUG 1 00:10:56.598 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:56.598 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:56.598 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:56.598 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:56.598 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:56.598 #undef SPDK_CONFIG_DPDK_UADK 00:10:56.598 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:56.598 #define SPDK_CONFIG_EXAMPLES 1 00:10:56.598 #undef SPDK_CONFIG_FC 00:10:56.598 #define SPDK_CONFIG_FC_PATH 00:10:56.598 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:56.598 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:56.598 #define SPDK_CONFIG_FSDEV 1 00:10:56.598 #undef SPDK_CONFIG_FUSE 00:10:56.598 #undef SPDK_CONFIG_FUZZER 00:10:56.598 #define SPDK_CONFIG_FUZZER_LIB 00:10:56.598 #undef SPDK_CONFIG_GOLANG 00:10:56.598 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:56.598 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:56.598 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:56.598 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:56.598 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:56.598 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:56.598 #undef SPDK_CONFIG_HAVE_LZ4 00:10:56.598 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:56.598 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:56.598 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:56.598 #define SPDK_CONFIG_IDXD 1 00:10:56.598 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:56.598 #undef SPDK_CONFIG_IPSEC_MB 00:10:56.598 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:56.598 #define SPDK_CONFIG_ISAL 1 00:10:56.598 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:56.598 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:56.598 #define SPDK_CONFIG_LIBDIR 00:10:56.598 #undef SPDK_CONFIG_LTO 00:10:56.598 #define SPDK_CONFIG_MAX_LCORES 128 00:10:56.598 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:56.598 #define SPDK_CONFIG_NVME_CUSE 1 00:10:56.598 #undef SPDK_CONFIG_OCF 00:10:56.598 #define SPDK_CONFIG_OCF_PATH 00:10:56.598 #define SPDK_CONFIG_OPENSSL_PATH 00:10:56.598 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:56.598 #define SPDK_CONFIG_PGO_DIR 00:10:56.598 #undef SPDK_CONFIG_PGO_USE 00:10:56.598 #define SPDK_CONFIG_PREFIX /usr/local 00:10:56.598 #undef SPDK_CONFIG_RAID5F 00:10:56.598 #undef SPDK_CONFIG_RBD 00:10:56.598 #define SPDK_CONFIG_RDMA 1 00:10:56.598 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:56.598 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:56.598 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:56.598 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:56.598 #define SPDK_CONFIG_SHARED 1 00:10:56.598 #undef SPDK_CONFIG_SMA 00:10:56.598 #define SPDK_CONFIG_TESTS 1 00:10:56.598 #undef SPDK_CONFIG_TSAN 
00:10:56.598 #define SPDK_CONFIG_UBLK 1 00:10:56.598 #define SPDK_CONFIG_UBSAN 1 00:10:56.598 #undef SPDK_CONFIG_UNIT_TESTS 00:10:56.598 #undef SPDK_CONFIG_URING 00:10:56.598 #define SPDK_CONFIG_URING_PATH 00:10:56.598 #undef SPDK_CONFIG_URING_ZNS 00:10:56.598 #undef SPDK_CONFIG_USDT 00:10:56.598 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:56.598 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:56.598 #define SPDK_CONFIG_VFIO_USER 1 00:10:56.598 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:56.598 #define SPDK_CONFIG_VHOST 1 00:10:56.598 #define SPDK_CONFIG_VIRTIO 1 00:10:56.598 #undef SPDK_CONFIG_VTUNE 00:10:56.598 #define SPDK_CONFIG_VTUNE_DIR 00:10:56.598 #define SPDK_CONFIG_WERROR 1 00:10:56.598 #define SPDK_CONFIG_WPDK_DIR 00:10:56.598 #undef SPDK_CONFIG_XNVME 00:10:56.598 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:56.598 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:56.599 07:09:18 
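
[Editor's note] From run_test nvmf_filesystem onward, everything traced so far is environment scaffolding rather than the test itself: build_config.sh replays the configure result as CONFIG_* shell variables, applications.sh cross-checks the generated C header before trusting debug-only helpers, pm/common registers the resource monitors, and the long run of ': 0' / ': 1' no-ops that follows defaults every SPDK_TEST_* switch the job left unset (here NVMF, NVME_CLI, VFIOUSER and RUN_UBSAN come out as 1). A sketch touching each layer; the paths match this workspace, and the := idiom is presumed from the ': 0' traces rather than quoted from the script:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Shell view of the build: configure output as CONFIG_* variables.
    source "$spdk/test/common/build_config.sh"
    [[ $CONFIG_UBSAN == y ]] && echo "UBSAN-instrumented build"

    # C view of the same switches, as applications.sh checks them.
    grep -q '#define SPDK_CONFIG_DEBUG 1' "$spdk/include/spdk/config.h" \
        && echo "debug build"

    # Test-flag defaulting: keep a value the job config exported, else off;
    # export either way so nested test scripts all see the same decision.
    : "${SPDK_TEST_NVMF:=0}"
    export SPDK_TEST_NVMF
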
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:56.599 07:09:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:56.599 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
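The exports traced above wire sanitizer behavior into every process the harness spawns: ASAN aborts on the first error (with coredumps left enabled), UBSAN halts with a stack trace and exit code 134, and Python bytecode caching is disabled so the workspace stays clean. A minimal standalone sketch of the same wiring, where my_test_binary is a placeholder and not an SPDK tool:

#!/usr/bin/env bash
# Sketch: sanitizer wiring as traced above (paths/binary name illustrative).
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
export PYTHONDONTWRITEBYTECODE=1   # keep .pyc files out of the test workspace
./my_test_binary "$@"              # placeholder: any ASAN/UBSAN hit now aborts the run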
00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:56.600 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3393677 ]] 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3393677 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 
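The call just traced, set_test_storage 2147483648, is the harness requesting ~2 GiB of scratch space: it assembles candidate directories (the test directory, a mktemp -udt fallback under /tmp), parses df -T output into mounts/sizes/avails arrays, and settles on the first candidate whose filesystem has enough room — here the overlay root with ~118 GB available. A simplified sketch of that selection idea, assuming GNU coreutils df; the helper name and loop are illustrative, not the harness's exact code:

# Sketch of the storage-candidate selection (assumes GNU df).
pick_test_storage() {
    local requested=$1; shift
    local dir avail
    for dir in "$@"; do
        mkdir -p "$dir" 2>/dev/null || continue
        # available bytes on the filesystem backing $dir
        avail=$(df --output=avail -B1 "$dir" | tail -n1)
        (( avail >= requested )) && { echo "$dir"; return 0; }
    done
    return 1
}
# e.g.: pick_test_storage $((2 * 1024**3)) "$testdir" /tmp/spdk_fallback/tests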
00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.wd0iEo 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.wd0iEo/tests/target /tmp/spdk.wd0iEo 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:56.601 07:09:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=118829858816 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356509184 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10526650368 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64666886144 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678252544 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847934976 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871302656 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23367680 00:10:56.601 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=216064 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=287744 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:56.602 07:09:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677871616 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678256640 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=385024 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935634944 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935647232 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:56.602 * Looking for test storage... 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=118829858816 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=12741242880 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.602 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:56.863 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:56.863 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:56.863 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:56.863 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:56.863 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:56.863 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:56.863 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:56.863 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:56.863 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:56.863 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:56.863 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:56.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.864 --rc genhtml_branch_coverage=1 00:10:56.864 --rc genhtml_function_coverage=1 00:10:56.864 --rc genhtml_legend=1 00:10:56.864 --rc geninfo_all_blocks=1 00:10:56.864 --rc geninfo_unexecuted_blocks=1 00:10:56.864 00:10:56.864 ' 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:56.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.864 --rc genhtml_branch_coverage=1 00:10:56.864 --rc genhtml_function_coverage=1 00:10:56.864 --rc genhtml_legend=1 00:10:56.864 --rc geninfo_all_blocks=1 00:10:56.864 --rc geninfo_unexecuted_blocks=1 00:10:56.864 00:10:56.864 ' 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:56.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.864 --rc genhtml_branch_coverage=1 00:10:56.864 --rc genhtml_function_coverage=1 00:10:56.864 --rc genhtml_legend=1 00:10:56.864 --rc geninfo_all_blocks=1 00:10:56.864 --rc geninfo_unexecuted_blocks=1 00:10:56.864 00:10:56.864 ' 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:56.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.864 --rc genhtml_branch_coverage=1 00:10:56.864 --rc genhtml_function_coverage=1 00:10:56.864 --rc genhtml_legend=1 00:10:56.864 --rc geninfo_all_blocks=1 00:10:56.864 --rc geninfo_unexecuted_blocks=1 00:10:56.864 00:10:56.864 ' 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.864 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.865 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.865 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:56.865 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.865 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:56.865 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:56.865 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:56.865 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:56.865 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.865 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.865 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:56.865 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:56.865 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:56.865 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:56.865 07:09:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:56.865 07:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:56.865 07:09:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:56.865 07:09:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:56.865 07:09:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:56.865 07:09:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:56.865 07:09:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:56.865 07:09:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:56.865 07:09:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:56.865 07:09:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.865 07:09:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.865 07:09:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.865 07:09:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:56.865 07:09:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:56.865 07:09:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:56.865 07:09:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:05.003 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:05.003 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:05.003 07:09:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:05.003 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:05.003 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:05.003 07:09:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:05.003 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:05.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:05.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:11:05.003 00:11:05.004 --- 10.0.0.2 ping statistics --- 00:11:05.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.004 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:05.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:05.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:11:05.004 00:11:05.004 --- 10.0.0.1 ping statistics --- 00:11:05.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.004 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:05.004 ************************************ 00:11:05.004 START TEST nvmf_filesystem_no_in_capsule 00:11:05.004 ************************************ 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3397337 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3397337 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 3397337 ']' 00:11:05.004 
07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:05.004 07:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.004 [2024-11-20 07:09:26.668902] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:11:05.004 [2024-11-20 07:09:26.668959] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.004 [2024-11-20 07:09:26.768748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:05.004 [2024-11-20 07:09:26.822120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:05.004 [2024-11-20 07:09:26.822186] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:05.004 [2024-11-20 07:09:26.822194] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:05.004 [2024-11-20 07:09:26.822207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:05.004 [2024-11-20 07:09:26.822213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
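By this point nvmftestinit has built the test topology and started the target: the two E810 ports were detected, cvl_0_0 was moved into the fresh namespace cvl_0_0_ns_spdk, 10.0.0.1/10.0.0.2 were assigned, an iptables ACCEPT rule opened TCP port 4420, ping verified both directions, and nvmf_tgt (pid 3397337) was launched inside the namespace while waitforlisten blocks on its RPC socket. A hedged sketch of that start-and-wait step; the poll loop is illustrative, not the harness's actual waitforlisten implementation:

# Sketch: launch the target in its namespace and wait for the RPC socket.
NS=cvl_0_0_ns_spdk
RPC_SOCK=/var/tmp/spdk.sock
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
pid=$!
for _ in $(seq 1 100); do
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    # rpc_get_methods succeeds once the app is listening on the socket
    ./scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done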
00:11:05.004 [2024-11-20 07:09:26.824231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.004 [2024-11-20 07:09:26.824345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:05.004 [2024-11-20 07:09:26.824507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:05.004 [2024-11-20 07:09:26.824508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.266 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:05.266 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:11:05.266 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:05.266 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:05.266 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.266 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.266 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.527 [2024-11-20 07:09:27.548684] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.527 Malloc1 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.527 07:09:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.527 [2024-11-20 07:09:27.701786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:11:05.527 { 00:11:05.527 "name": "Malloc1", 00:11:05.527 "aliases": [ 00:11:05.527 "a23801ea-fea8-4315-ab75-076026cde217" 00:11:05.527 ], 00:11:05.527 "product_name": "Malloc disk", 00:11:05.527 "block_size": 512, 00:11:05.527 "num_blocks": 1048576, 00:11:05.527 "uuid": "a23801ea-fea8-4315-ab75-076026cde217", 00:11:05.527 "assigned_rate_limits": { 00:11:05.527 "rw_ios_per_sec": 0, 00:11:05.527 "rw_mbytes_per_sec": 0, 00:11:05.527 "r_mbytes_per_sec": 0, 00:11:05.527 "w_mbytes_per_sec": 0 00:11:05.527 }, 00:11:05.527 "claimed": true, 00:11:05.527 "claim_type": "exclusive_write", 00:11:05.527 "zoned": false, 00:11:05.527 "supported_io_types": { 00:11:05.527 "read": 
true, 00:11:05.527 "write": true, 00:11:05.527 "unmap": true, 00:11:05.527 "flush": true, 00:11:05.527 "reset": true, 00:11:05.527 "nvme_admin": false, 00:11:05.527 "nvme_io": false, 00:11:05.527 "nvme_io_md": false, 00:11:05.527 "write_zeroes": true, 00:11:05.527 "zcopy": true, 00:11:05.527 "get_zone_info": false, 00:11:05.527 "zone_management": false, 00:11:05.527 "zone_append": false, 00:11:05.527 "compare": false, 00:11:05.527 "compare_and_write": false, 00:11:05.527 "abort": true, 00:11:05.527 "seek_hole": false, 00:11:05.527 "seek_data": false, 00:11:05.527 "copy": true, 00:11:05.527 "nvme_iov_md": false 00:11:05.527 }, 00:11:05.527 "memory_domains": [ 00:11:05.527 { 00:11:05.527 "dma_device_id": "system", 00:11:05.527 "dma_device_type": 1 00:11:05.527 }, 00:11:05.527 { 00:11:05.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.527 "dma_device_type": 2 00:11:05.527 } 00:11:05.527 ], 00:11:05.527 "driver_specific": {} 00:11:05.527 } 00:11:05.527 ]' 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:11:05.527 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:11:05.795 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:11:05.795 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:11:05.795 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:11:05.795 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:05.795 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:07.313 07:09:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:07.313 07:09:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:11:07.313 07:09:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:07.313 07:09:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:07.313 07:09:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:11:09.223 07:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:09.223 07:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:09.223 07:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:09.223 07:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:09.223 07:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.223 07:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:09.223 07:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:09.223 07:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:09.223 07:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:09.223 07:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:09.223 07:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:09.223 07:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:09.223 07:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:09.223 07:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:09.223 07:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:09.223 07:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:09.223 07:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:09.483 07:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:09.742 07:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:11.124 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:11.124 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:11.124 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:11.124 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:11.124 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.124 ************************************ 00:11:11.124 START TEST filesystem_ext4 00:11:11.124 ************************************ 00:11:11.124 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
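Before this first subtest starts, the trace above has already provisioned the target and attached the host. Condensed into the equivalent scripts/rpc.py and nvme-cli calls (the address, NQN, serial, and sizes are the values from this run; the autotest wraps these in its rpc_cmd and waitforserial helpers):

# Target: TCP transport with 8 KiB I/O unit size and no in-capsule data
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
# 512 MiB malloc bdev with 512-byte blocks (num_blocks 1048576 in the dump above)
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host: connect, then carve a single GPT partition for the filesystem tests
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe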
00:11:11.124 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:11.124 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:11.124 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:11.124 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:11.125 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:11.125 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:11.125 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:11.125 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:11.125 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:11.125 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:11.125 mke2fs 1.47.0 (5-Feb-2023) 00:11:11.125 Discarding device blocks: 0/522240 done 00:11:11.125 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:11.125 Filesystem UUID: 24c9f035-daf0-46f4-a443-5e8488605b7f 00:11:11.125 Superblock backups stored on blocks: 00:11:11.125 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:11.125 00:11:11.125 Allocating group tables: 0/64 done 00:11:11.125 Writing inode tables: 0/64 done 00:11:13.665 Creating journal (8192 blocks): done 00:11:13.665 Writing superblocks and filesystem accounting information: 0/64 done 00:11:13.665 00:11:13.665 07:09:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:13.665 07:09:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:18.945 07:09:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:18.945 
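Each per-filesystem subtest then runs the same smoke cycle visible in the trace: mount the partition, create a file, sync, delete it, sync again, unmount, and confirm the device is still visible. As a standalone sketch (the real helper also retries the unmount, hence the i=0 counter in the trace):

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device

kill -0 "$nvmfpid"                        # assert the target process is still alive
lsblk -l -o NAME | grep -q -w nvme0n1     # device still present
lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still present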
07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3397337 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:18.945 00:11:18.945 real 0m8.058s 00:11:18.945 user 0m0.034s 00:11:18.945 sys 0m0.075s 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:18.945 ************************************ 00:11:18.945 END TEST filesystem_ext4 00:11:18.945 ************************************ 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.945 ************************************ 00:11:18.945 START TEST filesystem_btrfs 00:11:18.945 ************************************ 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:18.945 07:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:18.945 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:19.515 btrfs-progs v6.8.1 00:11:19.515 See https://btrfs.readthedocs.io for more information. 00:11:19.515 00:11:19.515 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:19.515 NOTE: several default settings have changed in version 5.15, please make sure 00:11:19.515 this does not affect your deployments: 00:11:19.515 - DUP for metadata (-m dup) 00:11:19.515 - enabled no-holes (-O no-holes) 00:11:19.515 - enabled free-space-tree (-R free-space-tree) 00:11:19.515 00:11:19.515 Label: (null) 00:11:19.515 UUID: a2c5cd95-5ca6-4ed4-8848-da5f08d39359 00:11:19.515 Node size: 16384 00:11:19.515 Sector size: 4096 (CPU page size: 4096) 00:11:19.515 Filesystem size: 510.00MiB 00:11:19.515 Block group profiles: 00:11:19.515 Data: single 8.00MiB 00:11:19.515 Metadata: DUP 32.00MiB 00:11:19.515 System: DUP 8.00MiB 00:11:19.515 SSD detected: yes 00:11:19.515 Zoned device: no 00:11:19.515 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:19.515 Checksum: crc32c 00:11:19.515 Number of devices: 1 00:11:19.515 Devices: 00:11:19.515 ID SIZE PATH 00:11:19.515 1 510.00MiB /dev/nvme0n1p1 00:11:19.515 00:11:19.515 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:19.515 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3397337 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:20.456 
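The force-flag branch traced here (and in the ext4 run above) is worth spelling out: mkfs.ext4 spells its force flag -F, while mkfs.btrfs and mkfs.xfs use lowercase -f. A reconstruction of the make_filesystem helper as it appears in the xtrace, with its retry counter elided (a sketch, not the verbatim autotest code):

make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local force
    if [ "$fstype" = ext4 ]; then
        force=-F    # e2fsprogs uses uppercase -F to force
    else
        force=-f    # btrfs-progs and xfsprogs use lowercase -f
    fi
    mkfs.$fstype $force "$dev_name"
}

# e.g. make_filesystem btrfs /dev/nvme0n1p1, as in this subtest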
07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:20.456 00:11:20.456 real 0m1.394s 00:11:20.456 user 0m0.035s 00:11:20.456 sys 0m0.116s 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:20.456 ************************************ 00:11:20.456 END TEST filesystem_btrfs 00:11:20.456 ************************************ 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.456 ************************************ 00:11:20.456 START TEST filesystem_xfs 00:11:20.456 ************************************ 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:20.456 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:20.456 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:20.456 = sectsz=512 attr=2, projid32bit=1 00:11:20.456 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:20.456 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:20.456 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:20.456 = sunit=0 swidth=0 blks 00:11:20.456 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:20.456 log =internal log bsize=4096 blocks=16384, version=2 00:11:20.456 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:20.456 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:21.397 Discarding blocks...Done. 00:11:21.397 07:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:21.397 07:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:23.306 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:23.306 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:23.306 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:23.306 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:23.306 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:23.307 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:23.307 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3397337 00:11:23.307 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:23.307 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:23.307 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:23.307 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:23.307 00:11:23.307 real 0m2.809s 00:11:23.307 user 0m0.021s 00:11:23.307 sys 0m0.085s 00:11:23.307 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:23.307 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:23.307 ************************************ 00:11:23.307 END TEST filesystem_xfs 00:11:23.307 ************************************ 00:11:23.307 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:23.567 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:23.567 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:23.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.828 07:09:45 
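After the last subtest the run tears everything down; the partition removal and disconnect above are followed (in the trace that continues below) by subsystem deletion and killprocess. Condensed, with the pid and NQN taken from this run:

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # drop the test partition under lock
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 3397337 && wait 3397337                     # stop the target and reap it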
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:23.828 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:11:23.828 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:23.828 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.828 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:23.828 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.828 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:23.828 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:23.828 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.828 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.828 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.828 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:23.828 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3397337 00:11:23.828 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3397337 ']' 00:11:23.828 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3397337 00:11:23.828 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:23.828 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:23.828 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3397337 00:11:23.828 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:23.828 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:23.828 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3397337' 00:11:23.828 killing process with pid 3397337 00:11:23.828 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 3397337 00:11:23.828 07:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 3397337 00:11:24.089 07:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:24.089 00:11:24.089 real 0m19.574s 00:11:24.089 user 1m17.307s 00:11:24.089 sys 0m1.497s 00:11:24.089 07:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:24.089 07:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.089 ************************************ 00:11:24.089 END TEST nvmf_filesystem_no_in_capsule 00:11:24.089 ************************************ 00:11:24.089 07:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:24.089 07:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:24.089 07:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:24.089 07:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:24.089 ************************************ 00:11:24.089 START TEST nvmf_filesystem_in_capsule 00:11:24.089 ************************************ 00:11:24.089 07:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:11:24.089 07:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:24.089 07:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:24.089 07:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:24.089 07:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:24.089 07:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.089 07:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3401503 00:11:24.089 07:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3401503 00:11:24.089 07:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:24.089 07:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 3401503 ']' 00:11:24.089 07:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.089 07:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:24.089 07:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:24.089 07:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:24.089 07:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.089 [2024-11-20 07:09:46.322074] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:11:24.089 [2024-11-20 07:09:46.322123] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.349 [2024-11-20 07:09:46.414528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:24.349 [2024-11-20 07:09:46.444279] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.349 [2024-11-20 07:09:46.444305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:24.349 [2024-11-20 07:09:46.444311] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.349 [2024-11-20 07:09:46.444315] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.349 [2024-11-20 07:09:46.444319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:24.349 [2024-11-20 07:09:46.445461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.349 [2024-11-20 07:09:46.445613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.349 [2024-11-20 07:09:46.445740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.349 [2024-11-20 07:09:46.445743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.920 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:24.920 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:11:24.920 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:24.920 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:24.920 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.920 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.920 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:24.920 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:24.920 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.920 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.920 [2024-11-20 07:09:47.170475] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.920 07:09:47 
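This second pass (nvmf_filesystem_in_capsule) repeats the whole sequence with one parameter changed: the transport is created here with -c 4096 instead of -c 0, so the target accepts up to 4 KiB of write data carried inside the command capsule itself rather than fetched in a separate transfer. The delta:

# For comparison, the no_in_capsule run above used:
#   nvmf_create_transport -t tcp -o -u 8192 -c 0
# This run permits 4096 bytes of in-capsule data:
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096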
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.920 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:24.920 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.920 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.181 Malloc1 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.181 [2024-11-20 07:09:47.303364] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:11:25.181 07:09:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:11:25.181 { 00:11:25.181 "name": "Malloc1", 00:11:25.181 "aliases": [ 00:11:25.181 "6ff566e9-0989-4573-8667-2a73dc1b02b1" 00:11:25.181 ], 00:11:25.181 "product_name": "Malloc disk", 00:11:25.181 "block_size": 512, 00:11:25.181 "num_blocks": 1048576, 00:11:25.181 "uuid": "6ff566e9-0989-4573-8667-2a73dc1b02b1", 00:11:25.181 "assigned_rate_limits": { 00:11:25.181 "rw_ios_per_sec": 0, 00:11:25.181 "rw_mbytes_per_sec": 0, 00:11:25.181 "r_mbytes_per_sec": 0, 00:11:25.181 "w_mbytes_per_sec": 0 00:11:25.181 }, 00:11:25.181 "claimed": true, 00:11:25.181 "claim_type": "exclusive_write", 00:11:25.181 "zoned": false, 00:11:25.181 "supported_io_types": { 00:11:25.181 "read": true, 00:11:25.181 "write": true, 00:11:25.181 "unmap": true, 00:11:25.181 "flush": true, 00:11:25.181 "reset": true, 00:11:25.181 "nvme_admin": false, 00:11:25.181 "nvme_io": false, 00:11:25.181 "nvme_io_md": false, 00:11:25.181 "write_zeroes": true, 00:11:25.181 "zcopy": true, 00:11:25.181 "get_zone_info": false, 00:11:25.181 "zone_management": false, 00:11:25.181 "zone_append": false, 00:11:25.181 "compare": false, 00:11:25.181 "compare_and_write": false, 00:11:25.181 "abort": true, 00:11:25.181 "seek_hole": false, 00:11:25.181 "seek_data": false, 00:11:25.181 "copy": true, 00:11:25.181 "nvme_iov_md": false 00:11:25.181 }, 00:11:25.181 "memory_domains": [ 00:11:25.181 { 00:11:25.181 "dma_device_id": "system", 00:11:25.181 "dma_device_type": 1 00:11:25.181 }, 00:11:25.181 { 00:11:25.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.181 "dma_device_type": 2 00:11:25.181 } 00:11:25.181 ], 00:11:25.181 "driver_specific": {} 00:11:25.181 } 00:11:25.181 ]' 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:25.181 07:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:27.104 07:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:27.104 07:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:11:27.104 07:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:27.104 07:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:27.104 07:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:11:29.018 07:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:29.018 07:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:29.018 07:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:29.018 07:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:29.018 07:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.018 07:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:29.018 07:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:29.018 07:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:29.018 07:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:29.018 07:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:29.018 07:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:29.018 07:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:29.018 07:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:29.018 07:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:29.018 07:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:29.018 07:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:29.018 07:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:29.279 07:09:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:29.852 07:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:30.792 07:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:30.792 07:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:30.792 07:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:30.792 07:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:30.792 07:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.792 ************************************ 00:11:30.792 START TEST filesystem_in_capsule_ext4 00:11:30.792 ************************************ 00:11:30.792 07:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:30.792 07:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:30.792 07:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:30.792 07:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:30.792 07:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:30.792 07:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:30.792 07:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:30.792 07:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:30.792 07:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:30.792 07:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:30.792 07:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:30.792 mke2fs 1.47.0 (5-Feb-2023) 00:11:30.792 Discarding device blocks: 0/522240 done 00:11:30.792 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:30.792 Filesystem UUID: 8c3e6203-c678-4100-8006-22f53cf4fafd 00:11:30.792 Superblock backups stored on blocks: 00:11:30.792 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:30.792 00:11:30.792 Allocating group tables: 0/64 done 00:11:30.792 Writing inode tables: 
0/64 done 00:11:32.175 Creating journal (8192 blocks): done 00:11:32.175 Writing superblocks and filesystem accounting information: 0/64 done 00:11:32.175 00:11:32.176 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:32.176 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:37.462 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:37.462 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:37.462 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:37.462 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:37.462 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:37.462 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:37.462 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3401503 00:11:37.462 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:37.462 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:37.462 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:37.462 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:37.462 00:11:37.462 real 0m6.785s 00:11:37.462 user 0m0.027s 00:11:37.462 sys 0m0.077s 00:11:37.462 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:37.462 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:37.462 ************************************ 00:11:37.462 END TEST filesystem_in_capsule_ext4 00:11:37.462 ************************************ 00:11:37.722 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:37.722 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:37.722 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:37.722 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.722 
************************************ 00:11:37.722 START TEST filesystem_in_capsule_btrfs 00:11:37.722 ************************************ 00:11:37.722 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:37.722 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:37.723 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:37.723 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:37.723 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:37.723 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:37.723 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:37.723 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:37.723 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:37.723 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:37.723 07:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:37.983 btrfs-progs v6.8.1 00:11:37.983 See https://btrfs.readthedocs.io for more information. 00:11:37.983 00:11:37.983 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:37.983 NOTE: several default settings have changed in version 5.15, please make sure 00:11:37.983 this does not affect your deployments: 00:11:37.983 - DUP for metadata (-m dup) 00:11:37.983 - enabled no-holes (-O no-holes) 00:11:37.983 - enabled free-space-tree (-R free-space-tree) 00:11:37.983 00:11:37.983 Label: (null) 00:11:37.983 UUID: 76b96b21-0e9a-46d9-b522-fc311c0517e5 00:11:37.983 Node size: 16384 00:11:37.983 Sector size: 4096 (CPU page size: 4096) 00:11:37.983 Filesystem size: 510.00MiB 00:11:37.983 Block group profiles: 00:11:37.983 Data: single 8.00MiB 00:11:37.983 Metadata: DUP 32.00MiB 00:11:37.983 System: DUP 8.00MiB 00:11:37.983 SSD detected: yes 00:11:37.983 Zoned device: no 00:11:37.983 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:37.983 Checksum: crc32c 00:11:37.983 Number of devices: 1 00:11:37.983 Devices: 00:11:37.983 ID SIZE PATH 00:11:37.983 1 510.00MiB /dev/nvme0n1p1 00:11:37.983 00:11:37.983 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:37.983 07:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3401503 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:39.366 00:11:39.366 real 0m1.476s 00:11:39.366 user 0m0.029s 00:11:39.366 sys 0m0.121s 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:39.366 ************************************ 00:11:39.366 END TEST filesystem_in_capsule_btrfs 00:11:39.366 ************************************ 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.366 ************************************ 00:11:39.366 START TEST filesystem_in_capsule_xfs 00:11:39.366 ************************************ 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:39.366 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:39.366 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:39.366 = sectsz=512 attr=2, projid32bit=1 00:11:39.366 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:39.366 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:39.366 data = bsize=4096 blocks=130560, imaxpct=25 00:11:39.366 = sunit=0 swidth=0 blks 00:11:39.366 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:39.366 log =internal log bsize=4096 blocks=16384, version=2 00:11:39.366 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:39.366 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:40.307 Discarding blocks...Done. 
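The make_filesystem helper traced at autotest_common.sh@928-939 (for btrfs above and xfs here) reduces to choosing a force flag and invoking the matching mkfs tool. A minimal sketch reconstructed from those xtrace lines; the ext4 branch is not taken anywhere in this log, so its -F flag is an assumption:

make_filesystem() {
    local fstype=$1          # sh@928
    local dev_name=$2        # sh@929
    local i=0 force          # sh@930-931
    if [ "$fstype" = ext4 ]; then    # sh@933
        force=-F             # assumption: ext4 branch not exercised in this log
    else
        force=-f             # sh@936
    fi
    mkfs."$fstype" $force "$dev_name" && return 0    # sh@939 / sh@947
}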
00:11:40.307 07:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:40.307 07:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:42.222 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:42.222 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:42.222 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:42.222 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:42.222 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:42.222 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:42.222 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3401503 00:11:42.223 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:42.223 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:42.223 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:42.223 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:42.482 00:11:42.482 real 0m3.135s 00:11:42.482 user 0m0.030s 00:11:42.482 sys 0m0.074s 00:11:42.482 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:42.482 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:42.482 ************************************ 00:11:42.482 END TEST filesystem_in_capsule_xfs 00:11:42.482 ************************************ 00:11:42.482 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:42.482 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:42.483 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:42.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.483 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:42.483 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:11:42.483 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:42.483 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.483 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:42.483 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.744 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:42.744 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.744 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.744 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.744 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.744 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:42.744 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3401503 00:11:42.744 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3401503 ']' 00:11:42.744 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3401503 00:11:42.744 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:42.744 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:42.744 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3401503 00:11:42.744 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:42.744 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:42.744 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3401503' 00:11:42.744 killing process with pid 3401503 00:11:42.744 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 3401503 00:11:42.744 07:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 3401503 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:43.005 00:11:43.005 real 0m18.782s 00:11:43.005 user 1m14.330s 00:11:43.005 sys 0m1.386s 00:11:43.005 07:10:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.005 ************************************ 00:11:43.005 END TEST nvmf_filesystem_in_capsule 00:11:43.005 ************************************ 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:43.005 rmmod nvme_tcp 00:11:43.005 rmmod nvme_fabrics 00:11:43.005 rmmod nvme_keyring 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.005 07:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:45.552 00:11:45.552 real 0m48.738s 00:11:45.552 user 2m34.044s 00:11:45.552 sys 0m8.826s 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:45.552 
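nvmftestfini, traced above, unwinds everything nvmftestinit set up. Stripped of the xtrace prefixes, the cleanup amounts to the following; the namespace deletion happens inside _remove_spdk_ns, whose body is suppressed in this log, so that line is an assumption:

sync
modprobe -v -r nvme-tcp        # pulls nvme_tcp, nvme_fabrics, nvme_keyring back out
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rules
ip netns delete cvl_0_0_ns_spdk                        # assumption: _remove_spdk_ns body not shown
ip -4 addr flush cvl_0_1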
************************************ 00:11:45.552 END TEST nvmf_filesystem 00:11:45.552 ************************************ 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:45.552 ************************************ 00:11:45.552 START TEST nvmf_target_discovery 00:11:45.552 ************************************ 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:45.552 * Looking for test storage... 00:11:45.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:45.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.552 --rc genhtml_branch_coverage=1 00:11:45.552 --rc genhtml_function_coverage=1 00:11:45.552 --rc genhtml_legend=1 00:11:45.552 --rc geninfo_all_blocks=1 00:11:45.552 --rc geninfo_unexecuted_blocks=1 00:11:45.552 00:11:45.552 ' 00:11:45.552 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:45.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.552 --rc genhtml_branch_coverage=1 00:11:45.552 --rc genhtml_function_coverage=1 00:11:45.552 --rc genhtml_legend=1 00:11:45.553 --rc geninfo_all_blocks=1 00:11:45.553 --rc geninfo_unexecuted_blocks=1 00:11:45.553 00:11:45.553 ' 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:45.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.553 --rc genhtml_branch_coverage=1 00:11:45.553 --rc genhtml_function_coverage=1 00:11:45.553 --rc genhtml_legend=1 00:11:45.553 --rc geninfo_all_blocks=1 00:11:45.553 --rc geninfo_unexecuted_blocks=1 00:11:45.553 00:11:45.553 ' 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:45.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.553 --rc genhtml_branch_coverage=1 00:11:45.553 --rc genhtml_function_coverage=1 00:11:45.553 --rc genhtml_legend=1 00:11:45.553 --rc geninfo_all_blocks=1 00:11:45.553 --rc geninfo_unexecuted_blocks=1 00:11:45.553 00:11:45.553 ' 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:45.553 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:45.553 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.699 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.699 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:53.699 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:53.699 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:53.699 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:53.699 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:53.699 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:53.699 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:53.699 07:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:53.699 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:53.699 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:53.699 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:53.699 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:53.699 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:53.699 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:53.699 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.699 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.699 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.699 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:53.700 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:53.700 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:53.700 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:53.700 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:53.700 07:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:53.700 07:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:53.700 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:53.700 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:53.700 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:53.700 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:53.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:11:53.700 00:11:53.700 --- 10.0.0.2 ping statistics --- 00:11:53.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.700 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:11:53.700 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:53.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:53.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:11:53.700 00:11:53.700 --- 10.0.0.1 ping statistics --- 00:11:53.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.700 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:11:53.700 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.700 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:53.700 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:53.700 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.700 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:53.700 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:53.700 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.700 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:53.700 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:53.700 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:53.700 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:53.700 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:53.700 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.700 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3409484 00:11:53.700 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3409484 00:11:53.701 07:10:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:53.701 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 3409484 ']' 00:11:53.701 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.701 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:53.701 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.701 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:53.701 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.701 [2024-11-20 07:10:15.199677] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:11:53.701 [2024-11-20 07:10:15.199747] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.701 [2024-11-20 07:10:15.298004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.701 [2024-11-20 07:10:15.352501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.701 [2024-11-20 07:10:15.352552] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.701 [2024-11-20 07:10:15.352561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.701 [2024-11-20 07:10:15.352573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.701 [2024-11-20 07:10:15.352579] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
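Pulling the nvmftestinit trace together: one e810 port (cvl_0_0) is moved into a private network namespace to play the target, while its sibling (cvl_0_1) stays in the default namespace as the initiator, so both ends of the TCP fabric run on a single host. A condensed replay of the setup this log records:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP through
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF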
00:11:53.701 [2024-11-20 07:10:15.354599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.701 [2024-11-20 07:10:15.354758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.701 [2024-11-20 07:10:15.354918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.701 [2024-11-20 07:10:15.354918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.962 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:53.962 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:11:53.962 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:53.962 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:53.962 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.962 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.962 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:53.962 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.962 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.962 [2024-11-20 07:10:16.078917] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.962 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.962 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:53.962 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:53.962 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:53.962 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.962 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.962 Null1 00:11:53.962 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.962 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:53.962 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.962 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.962 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.962 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.963 07:10:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.963 [2024-11-20 07:10:16.139457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.963 Null2 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:53.963 Null3 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.963 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.225 Null4 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.225 07:10:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.225 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:11:54.225 00:11:54.225 Discovery Log Number of Records 6, Generation counter 6 00:11:54.225 =====Discovery Log Entry 0====== 00:11:54.225 trtype: tcp 00:11:54.225 adrfam: ipv4 00:11:54.225 subtype: current discovery subsystem 00:11:54.225 treq: not required 00:11:54.225 portid: 0 00:11:54.225 trsvcid: 4420 00:11:54.225 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:54.225 traddr: 10.0.0.2 00:11:54.225 eflags: explicit discovery connections, duplicate discovery information 00:11:54.225 sectype: none 00:11:54.225 =====Discovery Log Entry 1====== 00:11:54.225 trtype: tcp 00:11:54.225 adrfam: ipv4 00:11:54.225 subtype: nvme subsystem 00:11:54.225 treq: not required 00:11:54.225 portid: 0 00:11:54.225 trsvcid: 4420 00:11:54.225 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:54.225 traddr: 10.0.0.2 00:11:54.225 eflags: none 00:11:54.225 sectype: none 00:11:54.225 =====Discovery Log Entry 2====== 00:11:54.225 trtype: tcp 00:11:54.225 adrfam: ipv4 00:11:54.225 subtype: nvme subsystem 00:11:54.225 treq: not required 00:11:54.225 portid: 0 00:11:54.225 trsvcid: 4420 00:11:54.225 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:54.225 traddr: 10.0.0.2 00:11:54.225 eflags: none 00:11:54.225 sectype: none 00:11:54.225 =====Discovery Log Entry 3====== 00:11:54.225 trtype: tcp 00:11:54.225 adrfam: ipv4 00:11:54.225 subtype: nvme subsystem 00:11:54.225 treq: not required 00:11:54.225 portid: 0 00:11:54.225 trsvcid: 4420 00:11:54.225 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:54.225 traddr: 10.0.0.2 00:11:54.225 eflags: none 00:11:54.225 sectype: none 00:11:54.225 =====Discovery Log Entry 4====== 00:11:54.225 trtype: tcp 00:11:54.225 adrfam: ipv4 00:11:54.225 subtype: nvme subsystem 
00:11:54.225 treq: not required 00:11:54.225 portid: 0 00:11:54.225 trsvcid: 4420 00:11:54.225 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:54.225 traddr: 10.0.0.2 00:11:54.225 eflags: none 00:11:54.225 sectype: none 00:11:54.225 =====Discovery Log Entry 5====== 00:11:54.225 trtype: tcp 00:11:54.225 adrfam: ipv4 00:11:54.225 subtype: discovery subsystem referral 00:11:54.225 treq: not required 00:11:54.225 portid: 0 00:11:54.225 trsvcid: 4430 00:11:54.225 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:54.225 traddr: 10.0.0.2 00:11:54.225 eflags: none 00:11:54.225 sectype: none 00:11:54.487 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:54.487 Perform nvmf subsystem discovery via RPC 00:11:54.487 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:54.487 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.487 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.487 [ 00:11:54.487 { 00:11:54.487 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:54.487 "subtype": "Discovery", 00:11:54.487 "listen_addresses": [ 00:11:54.487 { 00:11:54.487 "trtype": "TCP", 00:11:54.487 "adrfam": "IPv4", 00:11:54.487 "traddr": "10.0.0.2", 00:11:54.487 "trsvcid": "4420" 00:11:54.487 } 00:11:54.487 ], 00:11:54.487 "allow_any_host": true, 00:11:54.487 "hosts": [] 00:11:54.487 }, 00:11:54.487 { 00:11:54.487 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:54.487 "subtype": "NVMe", 00:11:54.487 "listen_addresses": [ 00:11:54.487 { 00:11:54.487 "trtype": "TCP", 00:11:54.487 "adrfam": "IPv4", 00:11:54.487 "traddr": "10.0.0.2", 00:11:54.487 "trsvcid": "4420" 00:11:54.487 } 00:11:54.487 ], 00:11:54.487 "allow_any_host": true, 00:11:54.487 "hosts": [], 00:11:54.487 "serial_number": "SPDK00000000000001", 00:11:54.487 "model_number": "SPDK bdev Controller", 00:11:54.487 "max_namespaces": 32, 00:11:54.487 "min_cntlid": 1, 00:11:54.487 "max_cntlid": 65519, 00:11:54.487 "namespaces": [ 00:11:54.487 { 00:11:54.487 "nsid": 1, 00:11:54.487 "bdev_name": "Null1", 00:11:54.487 "name": "Null1", 00:11:54.487 "nguid": "635550507C584495B8ABB628A9BEB1EE", 00:11:54.487 "uuid": "63555050-7c58-4495-b8ab-b628a9beb1ee" 00:11:54.487 } 00:11:54.488 ] 00:11:54.488 }, 00:11:54.488 { 00:11:54.488 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:54.488 "subtype": "NVMe", 00:11:54.488 "listen_addresses": [ 00:11:54.488 { 00:11:54.488 "trtype": "TCP", 00:11:54.488 "adrfam": "IPv4", 00:11:54.488 "traddr": "10.0.0.2", 00:11:54.488 "trsvcid": "4420" 00:11:54.488 } 00:11:54.488 ], 00:11:54.488 "allow_any_host": true, 00:11:54.488 "hosts": [], 00:11:54.488 "serial_number": "SPDK00000000000002", 00:11:54.488 "model_number": "SPDK bdev Controller", 00:11:54.488 "max_namespaces": 32, 00:11:54.488 "min_cntlid": 1, 00:11:54.488 "max_cntlid": 65519, 00:11:54.488 "namespaces": [ 00:11:54.488 { 00:11:54.488 "nsid": 1, 00:11:54.488 "bdev_name": "Null2", 00:11:54.488 "name": "Null2", 00:11:54.488 "nguid": "A6D51F15BD104F96AAFBA6C7E459FA06", 00:11:54.488 "uuid": "a6d51f15-bd10-4f96-aafb-a6c7e459fa06" 00:11:54.488 } 00:11:54.488 ] 00:11:54.488 }, 00:11:54.488 { 00:11:54.488 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:54.488 "subtype": "NVMe", 00:11:54.488 "listen_addresses": [ 00:11:54.488 { 00:11:54.488 "trtype": "TCP", 00:11:54.488 "adrfam": "IPv4", 00:11:54.488 "traddr": "10.0.0.2", 
00:11:54.488 "trsvcid": "4420" 00:11:54.488 } 00:11:54.488 ], 00:11:54.488 "allow_any_host": true, 00:11:54.488 "hosts": [], 00:11:54.488 "serial_number": "SPDK00000000000003", 00:11:54.488 "model_number": "SPDK bdev Controller", 00:11:54.488 "max_namespaces": 32, 00:11:54.488 "min_cntlid": 1, 00:11:54.488 "max_cntlid": 65519, 00:11:54.488 "namespaces": [ 00:11:54.488 { 00:11:54.488 "nsid": 1, 00:11:54.488 "bdev_name": "Null3", 00:11:54.488 "name": "Null3", 00:11:54.488 "nguid": "5B76369CA52944ABB0562A08F5DAE15A", 00:11:54.488 "uuid": "5b76369c-a529-44ab-b056-2a08f5dae15a" 00:11:54.488 } 00:11:54.488 ] 00:11:54.488 }, 00:11:54.488 { 00:11:54.488 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:54.488 "subtype": "NVMe", 00:11:54.488 "listen_addresses": [ 00:11:54.488 { 00:11:54.488 "trtype": "TCP", 00:11:54.488 "adrfam": "IPv4", 00:11:54.488 "traddr": "10.0.0.2", 00:11:54.488 "trsvcid": "4420" 00:11:54.488 } 00:11:54.488 ], 00:11:54.488 "allow_any_host": true, 00:11:54.488 "hosts": [], 00:11:54.488 "serial_number": "SPDK00000000000004", 00:11:54.488 "model_number": "SPDK bdev Controller", 00:11:54.488 "max_namespaces": 32, 00:11:54.488 "min_cntlid": 1, 00:11:54.488 "max_cntlid": 65519, 00:11:54.488 "namespaces": [ 00:11:54.488 { 00:11:54.488 "nsid": 1, 00:11:54.488 "bdev_name": "Null4", 00:11:54.488 "name": "Null4", 00:11:54.488 "nguid": "D3573211C60A4423ABD4FCB8A09F3BA2", 00:11:54.488 "uuid": "d3573211-c60a-4423-abd4-fcb8a09f3ba2" 00:11:54.488 } 00:11:54.488 ] 00:11:54.488 } 00:11:54.488 ] 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.488 07:10:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.488 07:10:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:54.488 rmmod nvme_tcp 00:11:54.488 rmmod nvme_fabrics 00:11:54.488 rmmod nvme_keyring 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3409484 ']' 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3409484 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 3409484 ']' 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 3409484 00:11:54.488 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:11:54.750 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:54.750 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3409484 00:11:54.750 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:54.750 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:54.750 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3409484' 00:11:54.750 killing process with pid 3409484 00:11:54.750 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 3409484 00:11:54.750 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 3409484 00:11:54.750 07:10:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:54.750 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:54.750 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:54.750 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:54.750 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:54.750 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:54.750 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:54.750 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:54.750 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:54.750 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.750 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.750 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.298 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:57.298 00:11:57.298 real 0m11.744s 00:11:57.298 user 0m8.894s 00:11:57.298 sys 0m6.193s 00:11:57.298 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:57.298 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.299 ************************************ 00:11:57.299 END TEST nvmf_target_discovery 00:11:57.299 ************************************ 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:57.299 ************************************ 00:11:57.299 START TEST nvmf_referrals 00:11:57.299 ************************************ 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:57.299 * Looking for test storage... 
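The nvmf_target_discovery run that just finished drives everything through rpc_cmd, the harness wrapper around SPDK's scripts/rpc.py. A condensed sketch of the same setup/teardown sequence follows; it is not the harness code itself, and it assumes rpc.py is on PATH and talking to the default /var/tmp/spdk.sock socket. All RPC names and arguments are the ones traced above.

#!/usr/bin/env bash
# Sketch of the nvmf_target_discovery RPC sequence (arguments as in the trace above).
set -e
for i in $(seq 1 4); do
  rpc.py bdev_null_create "Null$i" 102400 512                       # null bdev, sizes as traced
  rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a \
         -s "SPDK0000000000000$i"                                   # -a: allow any host
  rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
         -t tcp -a 10.0.0.2 -s 4420
done
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
# The initiator then sees 6 discovery log records: the current discovery
# subsystem, cnode1-4, and the port-4430 referral.
nvme discover -t tcp -a 10.0.0.2 -s 4420    # the run above also passes --hostnqn/--hostid
# Teardown mirrors setup:
for i in $(seq 1 4); do
  rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
  rpc.py bdev_null_delete "Null$i"
done
rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430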
00:11:57.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:57.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.299 --rc genhtml_branch_coverage=1 00:11:57.299 --rc genhtml_function_coverage=1 00:11:57.299 --rc genhtml_legend=1 00:11:57.299 --rc geninfo_all_blocks=1 00:11:57.299 --rc geninfo_unexecuted_blocks=1 00:11:57.299 00:11:57.299 ' 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:57.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.299 --rc genhtml_branch_coverage=1 00:11:57.299 --rc genhtml_function_coverage=1 00:11:57.299 --rc genhtml_legend=1 00:11:57.299 --rc geninfo_all_blocks=1 00:11:57.299 --rc geninfo_unexecuted_blocks=1 00:11:57.299 00:11:57.299 ' 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:57.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.299 --rc genhtml_branch_coverage=1 00:11:57.299 --rc genhtml_function_coverage=1 00:11:57.299 --rc genhtml_legend=1 00:11:57.299 --rc geninfo_all_blocks=1 00:11:57.299 --rc geninfo_unexecuted_blocks=1 00:11:57.299 00:11:57.299 ' 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:57.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.299 --rc genhtml_branch_coverage=1 00:11:57.299 --rc genhtml_function_coverage=1 00:11:57.299 --rc genhtml_legend=1 00:11:57.299 --rc geninfo_all_blocks=1 00:11:57.299 --rc geninfo_unexecuted_blocks=1 00:11:57.299 00:11:57.299 ' 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.299 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:57.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:57.300 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:05.444 07:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:05.444 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:05.444 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:05.444 
07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:05.444 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:05.444 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:05.444 07:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:05.444 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:05.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:05.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:12:05.445 00:12:05.445 --- 10.0.0.2 ping statistics --- 00:12:05.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.445 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:05.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:05.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:12:05.445 00:12:05.445 --- 10.0.0.1 ping statistics --- 00:12:05.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.445 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3414180 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3414180 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 3414180 ']' 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
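nvmftestinit/nvmfappstart above isolate the target-side port (cvl_0_0) in a network namespace and run nvmf_tgt inside it, so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2) talk over the two e810 ports of one host. Below is a minimal sketch of that bring-up; the interface names, addresses, and nvmf_tgt flags are taken from the trace, while the readiness poll is an assumption standing in for the harness's waitforlisten (a Unix-domain RPC socket is visible across namespaces, so rpc.py runs outside the netns).

# Put the target port in its own namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Start the target in the namespace: -i 0 shm id, -e 0xFFFF tracepoint mask,
# -m 0xF core mask (4 reactors, matching "Total cores available: 4" below).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# Assumed readiness loop: retry until the RPC socket answers.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done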
00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:05.445 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.445 [2024-11-20 07:10:26.997375] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:12:05.445 [2024-11-20 07:10:26.997438] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.445 [2024-11-20 07:10:27.095656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.445 [2024-11-20 07:10:27.149071] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.445 [2024-11-20 07:10:27.149125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.445 [2024-11-20 07:10:27.149134] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.445 [2024-11-20 07:10:27.149141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.445 [2024-11-20 07:10:27.149147] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.445 [2024-11-20 07:10:27.151155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.445 [2024-11-20 07:10:27.151323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.445 [2024-11-20 07:10:27.151580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.445 [2024-11-20 07:10:27.151583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.705 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:05.705 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:12:05.705 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:05.705 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:05.705 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.705 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.705 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:05.705 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.705 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.705 [2024-11-20 07:10:27.875603] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.706 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.706 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:05.706 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.706 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:12:05.706 [2024-11-20 07:10:27.891996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:05.706 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.706 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:05.706 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.706 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.706 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.706 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:05.706 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.706 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.706 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.706 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:05.706 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.706 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.706 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.706 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:05.706 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:05.706 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.706 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.706 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.966 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:05.966 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:05.966 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:05.966 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:05.966 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:05.966 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.966 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.966 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:05.966 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.966 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:05.966 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:05.966 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:05.966 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:05.966 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:05.966 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:05.966 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:05.966 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:05.966 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:05.966 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:05.966 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:05.966 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.966 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.966 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.966 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:05.966 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.966 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.227 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.227 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:06.227 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.227 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.227 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.227 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:06.227 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.227 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:06.227 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.227 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.227 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:06.227 07:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:06.227 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:06.227 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:06.227 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:06.227 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:06.227 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:06.488 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:06.488 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:06.488 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:06.488 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.488 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.488 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.488 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:06.488 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.488 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.489 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.489 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:06.489 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:06.489 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:06.489 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:06.489 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.489 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.489 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:06.489 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.489 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:06.489 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:06.489 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:06.489 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:06.489 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:06.489 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:06.489 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:06.489 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:06.750 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:06.750 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:06.750 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:06.750 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:06.750 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:06.750 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:06.750 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:06.750 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:06.750 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:06.750 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:06.750 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:06.750 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:06.750 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:07.011 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:07.011 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:07.011 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.011 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.011 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.011 07:10:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:07.011 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:07.011 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:07.011 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:07.011 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.011 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.011 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:07.011 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.011 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:07.011 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:07.011 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:07.011 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:07.011 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:07.011 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:07.011 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:07.011 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:07.271 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:07.271 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:07.271 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:07.271 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:07.271 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:07.271 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:07.271 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:07.532 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:07.532 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:07.532 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:07.532 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@76 -- # jq -r .subnqn 00:12:07.532 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:07.532 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:07.532 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:07.532 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:07.532 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.532 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.532 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.532 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:07.532 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:07.532 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.532 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.532 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.532 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:07.532 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:07.532 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:07.532 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:07.532 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:07.532 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:07.532 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:07.796 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:07.796 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:07.796 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:07.796 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:07.796 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:07.796 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:07.796 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp 
']' 00:12:07.796 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:07.796 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:07.796 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:07.796 rmmod nvme_tcp 00:12:07.796 rmmod nvme_fabrics 00:12:07.796 rmmod nvme_keyring 00:12:07.796 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3414180 ']' 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3414180 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 3414180 ']' 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 3414180 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3414180 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3414180' 00:12:08.166 killing process with pid 3414180 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 3414180 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 3414180 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.166 07:10:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.166 07:10:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.160 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:10.160 00:12:10.160 real 0m13.198s 00:12:10.160 user 0m15.532s 00:12:10.160 sys 0m6.623s 00:12:10.160 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:10.160 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.160 ************************************ 00:12:10.160 END TEST nvmf_referrals 00:12:10.160 ************************************ 00:12:10.160 07:10:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:10.160 07:10:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:10.160 07:10:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:10.160 07:10:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:10.420 ************************************ 00:12:10.420 START TEST nvmf_connect_disconnect 00:12:10.420 ************************************ 00:12:10.420 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:10.420 * Looking for test storage... 00:12:10.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:10.420 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:10.420 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:12:10.420 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:10.420 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:10.420 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:10.420 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:10.421 07:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:10.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.421 --rc genhtml_branch_coverage=1 00:12:10.421 --rc genhtml_function_coverage=1 00:12:10.421 --rc genhtml_legend=1 00:12:10.421 --rc geninfo_all_blocks=1 00:12:10.421 --rc geninfo_unexecuted_blocks=1 00:12:10.421 00:12:10.421 ' 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:10.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.421 --rc genhtml_branch_coverage=1 00:12:10.421 --rc genhtml_function_coverage=1 00:12:10.421 --rc genhtml_legend=1 00:12:10.421 --rc geninfo_all_blocks=1 00:12:10.421 --rc geninfo_unexecuted_blocks=1 00:12:10.421 00:12:10.421 ' 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:10.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.421 --rc genhtml_branch_coverage=1 00:12:10.421 --rc genhtml_function_coverage=1 00:12:10.421 --rc genhtml_legend=1 00:12:10.421 --rc geninfo_all_blocks=1 00:12:10.421 --rc geninfo_unexecuted_blocks=1 00:12:10.421 00:12:10.421 ' 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:10.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.421 --rc genhtml_branch_coverage=1 00:12:10.421 --rc genhtml_function_coverage=1 00:12:10.421 --rc genhtml_legend=1 00:12:10.421 --rc geninfo_all_blocks=1 00:12:10.421 --rc geninfo_unexecuted_blocks=1 00:12:10.421 00:12:10.421 ' 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.421 07:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:10.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:10.421 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:18.558 
07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:18.558 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:18.558 
07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:18.558 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:18.558 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:18.558 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:18.558 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:18.559 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:18.559 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.559 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:18.559 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:18.559 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:18.559 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:18.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:18.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:12:18.559 00:12:18.559 --- 10.0.0.2 ping statistics --- 00:12:18.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.559 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:18.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:18.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:12:18.559 00:12:18.559 --- 10.0.0.1 ping statistics --- 00:12:18.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.559 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3418957 00:12:18.559 07:10:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3418957 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 3418957 ']' 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:18.559 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.559 [2024-11-20 07:10:40.284291] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:12:18.559 [2024-11-20 07:10:40.284360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.559 [2024-11-20 07:10:40.384794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.559 [2024-11-20 07:10:40.440010] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.559 [2024-11-20 07:10:40.440061] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.559 [2024-11-20 07:10:40.440070] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.559 [2024-11-20 07:10:40.440077] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.559 [2024-11-20 07:10:40.440084] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
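The target provisioning performed in the trace just below is the standard four-step transport/bdev/subsystem/listener sequence. Spelled out directly against rpc.py, with all arguments as they appear in this run (a sketch; the test issues the same calls through rpc_cmd):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512      # 64 MB malloc bdev, 512 B blocks -> "Malloc0"
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420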
00:12:18.559 [2024-11-20 07:10:40.442047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.559 [2024-11-20 07:10:40.442220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.559 [2024-11-20 07:10:40.442309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.559 [2024-11-20 07:10:40.442311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.130 [2024-11-20 07:10:41.163585] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.130 07:10:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.130 [2024-11-20 07:10:41.242906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:19.130 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:23.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.414 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:37.414 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:37.414 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:37.414 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:37.414 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:37.414 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:37.414 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.414 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:37.414 rmmod nvme_tcp 00:12:37.414 rmmod nvme_fabrics 00:12:37.414 rmmod nvme_keyring 00:12:37.414 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:37.414 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:37.414 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:37.414 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3418957 ']' 00:12:37.414 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3418957 00:12:37.414 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3418957 ']' 00:12:37.414 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 3418957 00:12:37.414 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
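Stripped of the xtrace noise, the target-side provisioning traced above is a short RPC sequence; replayed by hand against the same socket it would look roughly like this (NQN, serial number, address, and flags copied from the trace; rpc.py stands in for the test's rpc_cmd wrapper):

# Sketch of the provisioning RPCs traced above, issued via scripts/rpc.py.
rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0      # transport flags exactly as traced
$rpc bdev_malloc_create 64 512                         # 64 MiB bdev, 512 B blocks -> "Malloc0"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines that follow are the visible output of the test's num_iterations=5 loop, presumably an nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 followed by nvme disconnect -n nqn.2016-06.io.spdk:cnode1 on each pass.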
00:12:37.414 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:37.414 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3418957 00:12:37.414 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:37.414 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:37.414 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3418957' 00:12:37.414 killing process with pid 3418957 00:12:37.414 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 3418957 00:12:37.414 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 3418957 00:12:37.673 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:37.673 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:37.673 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:37.673 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:37.673 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:37.673 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:37.673 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:37.673 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:37.673 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:37.673 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.673 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.673 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.215 07:11:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:40.215 00:12:40.215 real 0m29.438s 00:12:40.215 user 1m19.273s 00:12:40.215 sys 0m7.185s 00:12:40.215 07:11:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:40.215 07:11:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.215 ************************************ 00:12:40.215 END TEST nvmf_connect_disconnect 00:12:40.215 ************************************ 00:12:40.215 07:11:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:40.215 07:11:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:40.215 07:11:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:40.215 07:11:01 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:40.215 ************************************ 00:12:40.215 START TEST nvmf_multitarget 00:12:40.215 ************************************ 00:12:40.215 07:11:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:40.215 * Looking for test storage... 00:12:40.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:40.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.215 --rc genhtml_branch_coverage=1 00:12:40.215 --rc genhtml_function_coverage=1 00:12:40.215 --rc genhtml_legend=1 00:12:40.215 --rc geninfo_all_blocks=1 00:12:40.215 --rc geninfo_unexecuted_blocks=1 00:12:40.215 00:12:40.215 ' 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:40.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.215 --rc genhtml_branch_coverage=1 00:12:40.215 --rc genhtml_function_coverage=1 00:12:40.215 --rc genhtml_legend=1 00:12:40.215 --rc geninfo_all_blocks=1 00:12:40.215 --rc geninfo_unexecuted_blocks=1 00:12:40.215 00:12:40.215 ' 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:40.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.215 --rc genhtml_branch_coverage=1 00:12:40.215 --rc genhtml_function_coverage=1 00:12:40.215 --rc genhtml_legend=1 00:12:40.215 --rc geninfo_all_blocks=1 00:12:40.215 --rc geninfo_unexecuted_blocks=1 00:12:40.215 00:12:40.215 ' 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:40.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.215 --rc genhtml_branch_coverage=1 00:12:40.215 --rc genhtml_function_coverage=1 00:12:40.215 --rc genhtml_legend=1 00:12:40.215 --rc geninfo_all_blocks=1 00:12:40.215 --rc geninfo_unexecuted_blocks=1 00:12:40.215 00:12:40.215 ' 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:40.215 07:11:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:40.215 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:40.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:40.216 07:11:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:40.216 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:48.353 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:48.353 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:48.353 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:48.353 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:48.353 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:48.353 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:48.353 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:48.354 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:48.354 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:48.354 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:48.354 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:48.354 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:48.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:12:48.355 00:12:48.355 --- 10.0.0.2 ping statistics --- 00:12:48.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.355 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:48.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:48.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:12:48.355 00:12:48.355 --- 10.0.0.1 ping statistics --- 00:12:48.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.355 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3427085 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3427085 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 3427085 ']' 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:48.355 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:48.355 [2024-11-20 07:11:09.798389] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
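The ipts helper traced above (here and in the earlier test) wraps iptables so that every rule it inserts carries an SPDK_NVMF comment; the matching iptr helper at teardown then strips all tagged rules in one pass instead of tracking them individually. The pattern, as shown in the trace:

# Tag the ACCEPT rule for the NVMe/TCP port with an SPDK_NVMF comment:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# At teardown, drop every tagged rule in one shot:
iptables-save | grep -v SPDK_NVMF | iptables-restore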
00:12:48.355 [2024-11-20 07:11:09.798456] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.355 [2024-11-20 07:11:09.898029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:48.355 [2024-11-20 07:11:09.950680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.355 [2024-11-20 07:11:09.950732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:48.355 [2024-11-20 07:11:09.950741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.355 [2024-11-20 07:11:09.950748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.355 [2024-11-20 07:11:09.950754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:48.355 [2024-11-20 07:11:09.952796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.355 [2024-11-20 07:11:09.952960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.355 [2024-11-20 07:11:09.953121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.355 [2024-11-20 07:11:09.953121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:48.618 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:48.618 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:12:48.618 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:48.618 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:48.618 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:48.618 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.618 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:48.618 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:48.618 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:48.618 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:48.618 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:48.618 "nvmf_tgt_1" 00:12:48.878 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:48.878 "nvmf_tgt_2" 00:12:48.878 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
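The multitarget test itself is a counting exercise against the target-management RPCs: one default target exists at startup, two more are created, and the interleaved jq length checks verify the count goes 1 -> 3 -> 1 as they are deleted again. Condensed (script path and flags from the trace; -s presumably caps subsystems per target):

# Sketch of the multitarget check performed above, via the test's RPC wrapper.
mt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
[ "$($mt nvmf_get_targets | jq length)" -eq 1 ]   # default target only
$mt nvmf_create_target -n nvmf_tgt_1 -s 32
$mt nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($mt nvmf_get_targets | jq length)" -eq 3 ]   # default + two new targets
$mt nvmf_delete_target -n nvmf_tgt_1
$mt nvmf_delete_target -n nvmf_tgt_2
[ "$($mt nvmf_get_targets | jq length)" -eq 1 ]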
00:12:48.878 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:48.878 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:48.878 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:49.138 true 00:12:49.138 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:49.138 true 00:12:49.138 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:49.138 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:49.398 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:49.398 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:49.398 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:49.398 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:49.398 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:49.398 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:49.398 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:49.398 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:49.398 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:49.398 rmmod nvme_tcp 00:12:49.398 rmmod nvme_fabrics 00:12:49.398 rmmod nvme_keyring 00:12:49.398 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:49.398 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:49.398 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:49.398 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3427085 ']' 00:12:49.398 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3427085 00:12:49.398 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 3427085 ']' 00:12:49.398 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 3427085 00:12:49.398 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:12:49.398 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:49.398 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3427085 00:12:49.398 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:49.398 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:49.398 07:11:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3427085' 00:12:49.398 killing process with pid 3427085 00:12:49.398 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 3427085 00:12:49.398 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 3427085 00:12:49.659 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:49.659 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:49.659 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:49.659 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:49.659 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:49.659 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:49.659 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:49.659 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:49.659 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:49.659 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.659 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.659 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.202 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:52.202 00:12:52.202 real 0m11.911s 00:12:52.202 user 0m10.328s 00:12:52.202 sys 0m6.245s 00:12:52.202 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:52.202 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:52.202 ************************************ 00:12:52.202 END TEST nvmf_multitarget 00:12:52.202 ************************************ 00:12:52.202 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:52.202 07:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:52.202 07:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:52.202 07:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:52.202 ************************************ 00:12:52.202 START TEST nvmf_rpc 00:12:52.202 ************************************ 00:12:52.202 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:52.202 * Looking for test storage... 
00:12:52.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.202 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:52.202 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:52.202 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:52.202 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:52.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.203 --rc genhtml_branch_coverage=1 00:12:52.203 --rc genhtml_function_coverage=1 00:12:52.203 --rc genhtml_legend=1 00:12:52.203 --rc geninfo_all_blocks=1 00:12:52.203 --rc geninfo_unexecuted_blocks=1 00:12:52.203 00:12:52.203 ' 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:52.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.203 --rc genhtml_branch_coverage=1 00:12:52.203 --rc genhtml_function_coverage=1 00:12:52.203 --rc genhtml_legend=1 00:12:52.203 --rc geninfo_all_blocks=1 00:12:52.203 --rc geninfo_unexecuted_blocks=1 00:12:52.203 00:12:52.203 ' 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:52.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.203 --rc genhtml_branch_coverage=1 00:12:52.203 --rc genhtml_function_coverage=1 00:12:52.203 --rc genhtml_legend=1 00:12:52.203 --rc geninfo_all_blocks=1 00:12:52.203 --rc geninfo_unexecuted_blocks=1 00:12:52.203 00:12:52.203 ' 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:52.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.203 --rc genhtml_branch_coverage=1 00:12:52.203 --rc genhtml_function_coverage=1 00:12:52.203 --rc genhtml_legend=1 00:12:52.203 --rc geninfo_all_blocks=1 00:12:52.203 --rc geninfo_unexecuted_blocks=1 00:12:52.203 00:12:52.203 ' 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
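Before the test body runs, nvmf/common.sh (sourced just above) fixes the port numbers and generates a fresh host identity with nvme gen-hostnqn, keeping the trailing UUID as the host ID, as the traced NVME_HOSTNQN/NVME_HOSTID pair shows. A sketch of that setup (the exact parameter expansion used by common.sh may differ):

# Sketch: derive the host NQN/ID pair the way the trace above records it.
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep the trailing UUID (it contains no ':')
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")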
00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:52.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:52.203 07:11:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.203 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.204 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:52.204 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:52.204 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:52.204 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.338 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:00.338 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:00.338 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:00.338 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:00.338 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:00.338 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:00.338 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:00.338 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:00.338 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:00.338 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:00.338 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:00.338 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:00.338 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:00.338 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:00.338 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:00.338 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:00.338 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:00.339 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:00.339 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:00.339 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:00.339 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:00.339 07:11:21 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:00.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:00.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:13:00.339 00:13:00.339 --- 10.0.0.2 ping statistics --- 00:13:00.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.339 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:00.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:00.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:13:00.339 00:13:00.339 --- 10.0.0.1 ping statistics --- 00:13:00.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.339 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3431781 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3431781 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 3431781 ']' 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.339 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:00.340 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.340 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:00.340 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.340 [2024-11-20 07:11:21.896411] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
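
Before any RPC work, nvmf_tcp_init stitched together the test network verified by the ping exchanges above: the first E810 port (cvl_0_0) moves into a fresh namespace and carries the target address 10.0.0.2, its sibling cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and nvmf_tgt is then launched inside the namespace. Condensed from the trace into a standalone sketch (interface and namespace names are the ones the log reports; run as root from an spdk checkout, so the relative nvmf_tgt path is an assumption):

```bash
#!/usr/bin/env bash
set -e
NS=cvl_0_0_ns_spdk     # target namespace, as named in the log
TGT_IF=cvl_0_0         # E810 port that will host the NVMe/TCP listener
INI_IF=cvl_0_1         # sibling port the initiator connects from

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"            # isolate the target port
ip addr add 10.0.0.1/24 dev "$INI_IF"        # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
ping -c 1 10.0.0.2                           # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator

# Launch the target inside the namespace, as nvmfappstart does in the trace.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
```
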
00:13:00.340 [2024-11-20 07:11:21.896473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.340 [2024-11-20 07:11:21.997928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:00.340 [2024-11-20 07:11:22.051390] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.340 [2024-11-20 07:11:22.051444] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:00.340 [2024-11-20 07:11:22.051452] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:00.340 [2024-11-20 07:11:22.051460] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:00.340 [2024-11-20 07:11:22.051466] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:00.340 [2024-11-20 07:11:22.053552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.340 [2024-11-20 07:11:22.053714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.340 [2024-11-20 07:11:22.053878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.340 [2024-11-20 07:11:22.053878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:00.600 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:00.600 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:13:00.600 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:00.600 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:00.600 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.600 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.600 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:00.600 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.600 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.600 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.600 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:00.600 "tick_rate": 2400000000, 00:13:00.600 "poll_groups": [ 00:13:00.600 { 00:13:00.600 "name": "nvmf_tgt_poll_group_000", 00:13:00.600 "admin_qpairs": 0, 00:13:00.600 "io_qpairs": 0, 00:13:00.600 "current_admin_qpairs": 0, 00:13:00.600 "current_io_qpairs": 0, 00:13:00.600 "pending_bdev_io": 0, 00:13:00.600 "completed_nvme_io": 0, 00:13:00.600 "transports": [] 00:13:00.600 }, 00:13:00.600 { 00:13:00.600 "name": "nvmf_tgt_poll_group_001", 00:13:00.600 "admin_qpairs": 0, 00:13:00.600 "io_qpairs": 0, 00:13:00.600 "current_admin_qpairs": 0, 00:13:00.600 "current_io_qpairs": 0, 00:13:00.600 "pending_bdev_io": 0, 00:13:00.600 "completed_nvme_io": 0, 00:13:00.600 "transports": [] 00:13:00.600 }, 00:13:00.600 { 00:13:00.600 "name": "nvmf_tgt_poll_group_002", 00:13:00.600 "admin_qpairs": 0, 00:13:00.600 "io_qpairs": 0, 00:13:00.600 
"current_admin_qpairs": 0, 00:13:00.600 "current_io_qpairs": 0, 00:13:00.600 "pending_bdev_io": 0, 00:13:00.600 "completed_nvme_io": 0, 00:13:00.600 "transports": [] 00:13:00.600 }, 00:13:00.600 { 00:13:00.600 "name": "nvmf_tgt_poll_group_003", 00:13:00.600 "admin_qpairs": 0, 00:13:00.600 "io_qpairs": 0, 00:13:00.600 "current_admin_qpairs": 0, 00:13:00.600 "current_io_qpairs": 0, 00:13:00.600 "pending_bdev_io": 0, 00:13:00.600 "completed_nvme_io": 0, 00:13:00.600 "transports": [] 00:13:00.600 } 00:13:00.600 ] 00:13:00.600 }' 00:13:00.600 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:00.600 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:00.600 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:00.600 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:00.600 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:00.600 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:00.860 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:00.861 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:00.861 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.861 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.861 [2024-11-20 07:11:22.891055] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:00.861 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.861 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:00.861 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.861 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.861 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.861 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:00.861 "tick_rate": 2400000000, 00:13:00.861 "poll_groups": [ 00:13:00.861 { 00:13:00.861 "name": "nvmf_tgt_poll_group_000", 00:13:00.861 "admin_qpairs": 0, 00:13:00.861 "io_qpairs": 0, 00:13:00.861 "current_admin_qpairs": 0, 00:13:00.861 "current_io_qpairs": 0, 00:13:00.861 "pending_bdev_io": 0, 00:13:00.861 "completed_nvme_io": 0, 00:13:00.861 "transports": [ 00:13:00.861 { 00:13:00.861 "trtype": "TCP" 00:13:00.861 } 00:13:00.861 ] 00:13:00.861 }, 00:13:00.861 { 00:13:00.861 "name": "nvmf_tgt_poll_group_001", 00:13:00.861 "admin_qpairs": 0, 00:13:00.861 "io_qpairs": 0, 00:13:00.861 "current_admin_qpairs": 0, 00:13:00.861 "current_io_qpairs": 0, 00:13:00.861 "pending_bdev_io": 0, 00:13:00.861 "completed_nvme_io": 0, 00:13:00.861 "transports": [ 00:13:00.861 { 00:13:00.861 "trtype": "TCP" 00:13:00.861 } 00:13:00.861 ] 00:13:00.861 }, 00:13:00.861 { 00:13:00.861 "name": "nvmf_tgt_poll_group_002", 00:13:00.861 "admin_qpairs": 0, 00:13:00.861 "io_qpairs": 0, 00:13:00.861 "current_admin_qpairs": 0, 00:13:00.861 "current_io_qpairs": 0, 00:13:00.861 "pending_bdev_io": 0, 00:13:00.861 "completed_nvme_io": 0, 00:13:00.861 "transports": [ 00:13:00.861 { 00:13:00.861 "trtype": "TCP" 
00:13:00.861 } 00:13:00.861 ] 00:13:00.861 }, 00:13:00.861 { 00:13:00.861 "name": "nvmf_tgt_poll_group_003", 00:13:00.861 "admin_qpairs": 0, 00:13:00.861 "io_qpairs": 0, 00:13:00.861 "current_admin_qpairs": 0, 00:13:00.861 "current_io_qpairs": 0, 00:13:00.861 "pending_bdev_io": 0, 00:13:00.861 "completed_nvme_io": 0, 00:13:00.861 "transports": [ 00:13:00.861 { 00:13:00.861 "trtype": "TCP" 00:13:00.861 } 00:13:00.861 ] 00:13:00.861 } 00:13:00.861 ] 00:13:00.861 }' 00:13:00.861 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:00.861 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:00.861 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:00.861 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:00.861 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:00.861 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:00.861 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:00.861 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:00.861 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.861 Malloc1 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.861 [2024-11-20 07:11:23.102489] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:00.861 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:01.121 [2024-11-20 07:11:23.143422] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:01.121 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:01.122 could not add new controller: failed to write to nvme-fabrics device 00:13:01.122 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:01.122 07:11:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:01.122 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:01.122 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:01.122 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:01.122 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.122 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.122 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.122 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:02.503 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:02.503 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:02.503 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.503 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:02.503 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.094 [2024-11-20 07:11:26.914180] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:05.094 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:05.094 could not add new controller: failed to write to nvme-fabrics device 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.094 
07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.094 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.473 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.473 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:06.473 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.473 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:06.473 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:08.384 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:08.384 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:08.384 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.384 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:08.384 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.384 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:08.384 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.384 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:08.384 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:08.384 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:08.384 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.384 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:08.384 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.384 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:08.384 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.384 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.384 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.384 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.384 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:08.384 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:08.384 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.384 
07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.384 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.644 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.644 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.644 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.644 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.644 [2024-11-20 07:11:30.667067] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.644 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.644 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:08.644 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.644 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.644 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.644 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.644 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.644 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.644 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.645 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:10.026 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:10.026 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:10.026 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.026 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:10.026 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.566 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.567 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.567 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.567 [2024-11-20 07:11:34.467279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.567 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.567 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:12.567 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.567 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.567 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.567 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.567 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.567 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.567 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.567 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:13.949 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:13.949 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:13.949 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:13.949 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:13.949 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:15.872 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:15.872 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:15.872 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:15.872 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:15.872 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:15.872 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:15.872 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:15.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.872 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:15.872 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:15.872 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:15.872 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.872 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:15.872 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.872 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:15.872 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.872 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.872 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.217 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.217 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:16.217 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.217 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.217 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.217 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:16.217 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:16.217 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.217 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.217 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.217 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.217 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.217 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.217 [2024-11-20 07:11:38.186901] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.217 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.217 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:16.217 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.217 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.217 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.217 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:16.217 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.217 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.217 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.217 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:17.673 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:17.673 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:17.673 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.673 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:17.673 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:19.585 
07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:19.585 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:19.585 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.585 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:19.585 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.585 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:19.585 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:19.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
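For reference, the waitforserial and waitforserial_disconnect helpers traced here poll lsblk until the subsystem serial number appears (or disappears) in the SERIAL column of a block device. A rough sketch reconstructed from the commands visible in this trace: the retry bound of 15, the 2-second sleep, and the lsblk/grep pipelines come from the log, while the exact control flow and the failure paths are assumptions, not the verbatim autotest_common.sh source.

  # sketch only: reconstructed from the traced commands above
  waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=${2:-1} nvme_devices=0
    while ((i++ <= 15)); do                # retry bound seen in the trace
      sleep 2                              # delay seen in the trace
      # count block devices whose SERIAL column matches the subsystem serial
      nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
      ((nvme_devices == nvme_device_counter)) && return 0
    done
    return 1                               # assumed failure path
  }

  waitforserial_disconnect() {
    local serial=$1
    # both lsblk invocations appear in the trace; the loop and timeout
    # details are assumed
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
      sleep 1
    done
    return 0
  }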
00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.847 [2024-11-20 07:11:41.956074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.847 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.758 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:21.758 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:21.758 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.758 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:21.758 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:23.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
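Each pass of the loop traced above (target/rpc.sh lines 81-94, per the script markers) stands up a subsystem over RPC, connects the kernel initiator, verifies the namespace shows up, then tears everything back down. Schematically, with $rpc_py standing in for the rpc_cmd wrapper around scripts/rpc.py (the variable name is a placeholder, not taken from this log; NVME_HOST is the hostnqn/hostid array defined in nvmf/common.sh):

  for i in $(seq 1 $loops); do
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    $rpc_py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME        # poll until the namespace appears
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done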
00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.667 [2024-11-20 07:11:45.714329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.667 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:25.049 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:25.049 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:25.049 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:25.049 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:25.049 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:26.958 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:26.958 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:26.958 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:27.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:27.218 
07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.218 [2024-11-20 07:11:49.443167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.218 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.479 [2024-11-20 07:11:49.503277] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.479 
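The phase that begins at target/rpc.sh@99 repeats the same subsystem lifecycle with no host connection at all, stressing rapid create/delete cycles; the namespace is now added without -n and removed as nsid 1. In outline (again with $rpc_py as a placeholder for the traced rpc_cmd wrapper):

  for i in $(seq 1 $loops); do
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # remove_ns 1 below implies the auto-assigned nsid is 1
    $rpc_py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done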
07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.479 [2024-11-20 07:11:49.571456] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.479 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.479 [2024-11-20 07:11:49.643683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.480 [2024-11-20 07:11:49.711902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.480 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:27.740 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.740 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.740 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.740 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:27.740 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.740 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.740 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.740 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:27.740 "tick_rate": 2400000000, 00:13:27.740 "poll_groups": [ 00:13:27.740 { 00:13:27.740 "name": "nvmf_tgt_poll_group_000", 00:13:27.740 "admin_qpairs": 0, 00:13:27.740 "io_qpairs": 224, 00:13:27.740 "current_admin_qpairs": 0, 00:13:27.740 "current_io_qpairs": 0, 00:13:27.740 "pending_bdev_io": 0, 00:13:27.740 "completed_nvme_io": 505, 00:13:27.740 "transports": [ 00:13:27.740 { 00:13:27.740 "trtype": "TCP" 00:13:27.740 } 00:13:27.740 ] 00:13:27.740 }, 00:13:27.740 { 00:13:27.740 "name": "nvmf_tgt_poll_group_001", 00:13:27.740 "admin_qpairs": 1, 00:13:27.740 "io_qpairs": 223, 00:13:27.740 "current_admin_qpairs": 0, 00:13:27.740 "current_io_qpairs": 0, 00:13:27.740 "pending_bdev_io": 0, 00:13:27.740 "completed_nvme_io": 292, 00:13:27.741 "transports": [ 00:13:27.741 { 00:13:27.741 "trtype": "TCP" 00:13:27.741 } 00:13:27.741 ] 00:13:27.741 }, 00:13:27.741 { 00:13:27.741 "name": "nvmf_tgt_poll_group_002", 00:13:27.741 "admin_qpairs": 6, 00:13:27.741 "io_qpairs": 218, 00:13:27.741 "current_admin_qpairs": 0, 00:13:27.741 "current_io_qpairs": 0, 00:13:27.741 "pending_bdev_io": 0, 00:13:27.741 "completed_nvme_io": 218, 00:13:27.741 "transports": [ 00:13:27.741 { 00:13:27.741 "trtype": "TCP" 00:13:27.741 } 00:13:27.741 ] 00:13:27.741 }, 00:13:27.741 { 00:13:27.741 "name": "nvmf_tgt_poll_group_003", 00:13:27.741 "admin_qpairs": 0, 00:13:27.741 "io_qpairs": 224, 00:13:27.741 "current_admin_qpairs": 0, 00:13:27.741 "current_io_qpairs": 0, 00:13:27.741 "pending_bdev_io": 0, 00:13:27.741 "completed_nvme_io": 224, 00:13:27.741 "transports": [ 00:13:27.741 { 00:13:27.741 "trtype": "TCP" 00:13:27.741 } 00:13:27.741 ] 00:13:27.741 } 00:13:27.741 ] 00:13:27.741 }' 00:13:27.741 07:11:49 
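The stats blob captured just below from nvmf_get_stats is then aggregated by the jsum helper (target/rpc.sh@19-20 in the trace): jq extracts one numeric field per poll group and awk sums the resulting column. A minimal sketch; how the JSON reaches jq (a herestring from $stats here) is not visible in the trace and is an assumption:

  jsum() {
    local filter=$1
    # sum every value the jq filter selects from the captured stats JSON
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }
  # With the poll-group stats shown in this trace:
  #   jsum '.poll_groups[].admin_qpairs'  -> 0+1+6+0         = 7
  #   jsum '.poll_groups[].io_qpairs'     -> 224+223+218+224 = 889
  # matching the (( 7 > 0 )) and (( 889 > 0 )) checks that follow.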
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:27.741 rmmod nvme_tcp 00:13:27.741 rmmod nvme_fabrics 00:13:27.741 rmmod nvme_keyring 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3431781 ']' 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3431781 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 3431781 ']' 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 3431781 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:27.741 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3431781 00:13:28.001 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:28.001 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:28.001 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
3431781' 00:13:28.001 killing process with pid 3431781 00:13:28.001 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 3431781 00:13:28.001 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 3431781 00:13:28.001 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:28.001 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:28.001 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:28.001 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:28.001 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:28.001 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:28.001 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:28.001 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:28.001 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:28.001 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.001 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:28.001 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.542 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:30.542 00:13:30.542 real 0m38.257s 00:13:30.542 user 1m54.289s 00:13:30.542 sys 0m8.044s 00:13:30.542 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:30.542 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.542 ************************************ 00:13:30.542 END TEST nvmf_rpc 00:13:30.542 ************************************ 00:13:30.542 07:11:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:30.542 07:11:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:30.542 07:11:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:30.542 07:11:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:30.542 ************************************ 00:13:30.542 START TEST nvmf_invalid 00:13:30.542 ************************************ 00:13:30.542 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:30.542 * Looking for test storage... 
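Before the nvmf_invalid run gets underway, note the nvmftestfini teardown that closed out nvmf_rpc above: it unloads the initiator modules, kills the target process, restores iptables without the SPDK rules, and flushes the test interface. Condensed from the traced commands; the grouping and error handling here are assumptions, not the verbatim nvmf/common.sh:

  # condensed sketch of the traced teardown
  sync
  modprobe -v -r nvme-tcp                    # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"         # killprocess (pid 3431781 in this run)
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # the iptr step
  ip -4 addr flush cvl_0_1                   # part of the remove_spdk_ns cleanup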
00:13:30.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:30.542 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:30.542 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:30.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.543 --rc genhtml_branch_coverage=1 00:13:30.543 --rc genhtml_function_coverage=1 00:13:30.543 --rc genhtml_legend=1 00:13:30.543 --rc geninfo_all_blocks=1 00:13:30.543 --rc geninfo_unexecuted_blocks=1 00:13:30.543 00:13:30.543 ' 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:30.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.543 --rc genhtml_branch_coverage=1 00:13:30.543 --rc genhtml_function_coverage=1 00:13:30.543 --rc genhtml_legend=1 00:13:30.543 --rc geninfo_all_blocks=1 00:13:30.543 --rc geninfo_unexecuted_blocks=1 00:13:30.543 00:13:30.543 ' 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:30.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.543 --rc genhtml_branch_coverage=1 00:13:30.543 --rc genhtml_function_coverage=1 00:13:30.543 --rc genhtml_legend=1 00:13:30.543 --rc geninfo_all_blocks=1 00:13:30.543 --rc geninfo_unexecuted_blocks=1 00:13:30.543 00:13:30.543 ' 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:30.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.543 --rc genhtml_branch_coverage=1 00:13:30.543 --rc genhtml_function_coverage=1 00:13:30.543 --rc genhtml_legend=1 00:13:30.543 --rc geninfo_all_blocks=1 00:13:30.543 --rc geninfo_unexecuted_blocks=1 00:13:30.543 00:13:30.543 ' 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:30.543 07:11:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.543 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:30.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:30.544 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:38.683 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:38.683 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:38.683 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:38.683 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:38.683 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:38.683 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:38.683 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:38.683 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:38.683 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:38.683 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:38.683 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:38.683 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:38.683 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:38.683 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:38.683 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:38.683 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:38.683 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:38.683 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:38.683 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:38.683 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:38.683 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:38.683 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:38.684 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:38.684 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:38.684 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:38.684 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:38.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:38.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:13:38.684 00:13:38.684 --- 10.0.0.2 ping statistics --- 00:13:38.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.684 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:38.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
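[Annotation] At this point nvmftestinit has built the two-sided TCP test topology: the first E810 port (cvl_0_0) is moved into a fresh network namespace and becomes the target side at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule admits NVMe/TCP traffic on port 4420. Distilled from the trace above, the setup amounts to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The forward ping (to 10.0.0.2, 0.690 ms) has already come back above; the output of the reverse ping from inside the namespace continues below.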
00:13:38.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:13:38.684 00:13:38.684 --- 10.0.0.1 ping statistics --- 00:13:38.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.684 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:38.684 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:38.684 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:38.684 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:38.684 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:38.684 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:38.684 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3441654 00:13:38.684 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3441654 00:13:38.684 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:38.684 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 3441654 ']' 00:13:38.684 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.684 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:38.684 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.684 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:38.684 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:38.684 [2024-11-20 07:12:00.113893] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
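[Annotation] With connectivity verified, nvmfappstart launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 3441654) and waitforlisten blocks until /var/tmp/spdk.sock answers; its startup banner continues below. Everything after that is invalid.sh's negative-path testing, and every case has the same shape: issue an rpc.py call with a deliberately bad argument, capture the JSON-RPC error response, and pattern-match the message. A minimal sketch of the idiom (the exact capture syntax in invalid.sh may differ), using the first case — a nonexistent target name — as the example:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18825 2>&1) || true
    [[ $out == *"Unable to find target"* ]]    # the case passes only if the right error came back

The later cases swap in a control character ($'\037') appended to the serial and model numbers, over-length random strings, and out-of-range cntlid bounds, asserting on "Invalid SN", "Invalid MN", and "Invalid cntlid range" respectively.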
00:13:38.685 [2024-11-20 07:12:00.113965] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.685 [2024-11-20 07:12:00.213058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:38.685 [2024-11-20 07:12:00.266062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.685 [2024-11-20 07:12:00.266114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.685 [2024-11-20 07:12:00.266124] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.685 [2024-11-20 07:12:00.266131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.685 [2024-11-20 07:12:00.266137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:38.685 [2024-11-20 07:12:00.268213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.685 [2024-11-20 07:12:00.268302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.685 [2024-11-20 07:12:00.268461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.685 [2024-11-20 07:12:00.268462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:38.685 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:38.685 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:13:38.685 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:38.685 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:38.685 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:38.945 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.945 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:38.945 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18825 00:13:38.945 [2024-11-20 07:12:01.161290] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:38.945 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:38.945 { 00:13:38.945 "nqn": "nqn.2016-06.io.spdk:cnode18825", 00:13:38.945 "tgt_name": "foobar", 00:13:38.945 "method": "nvmf_create_subsystem", 00:13:38.945 "req_id": 1 00:13:38.945 } 00:13:38.945 Got JSON-RPC error response 00:13:38.945 response: 00:13:38.945 { 00:13:38.945 "code": -32603, 00:13:38.945 "message": "Unable to find target foobar" 00:13:38.945 }' 00:13:38.945 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:38.945 { 00:13:38.945 "nqn": "nqn.2016-06.io.spdk:cnode18825", 00:13:38.945 "tgt_name": "foobar", 00:13:38.945 "method": "nvmf_create_subsystem", 00:13:38.945 "req_id": 1 00:13:38.945 } 00:13:38.945 Got JSON-RPC error response 00:13:38.945 
response: 00:13:38.945 { 00:13:38.945 "code": -32603, 00:13:38.945 "message": "Unable to find target foobar" 00:13:38.945 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:38.945 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:38.945 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18714 00:13:39.206 [2024-11-20 07:12:01.370152] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18714: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:39.206 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:39.206 { 00:13:39.206 "nqn": "nqn.2016-06.io.spdk:cnode18714", 00:13:39.206 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:39.206 "method": "nvmf_create_subsystem", 00:13:39.206 "req_id": 1 00:13:39.206 } 00:13:39.206 Got JSON-RPC error response 00:13:39.206 response: 00:13:39.206 { 00:13:39.206 "code": -32602, 00:13:39.206 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:39.206 }' 00:13:39.206 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:39.206 { 00:13:39.206 "nqn": "nqn.2016-06.io.spdk:cnode18714", 00:13:39.206 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:39.206 "method": "nvmf_create_subsystem", 00:13:39.206 "req_id": 1 00:13:39.206 } 00:13:39.206 Got JSON-RPC error response 00:13:39.206 response: 00:13:39.206 { 00:13:39.206 "code": -32602, 00:13:39.206 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:39.206 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:39.206 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:39.206 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode32666 00:13:39.467 [2024-11-20 07:12:01.562840] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32666: invalid model number 'SPDK_Controller' 00:13:39.467 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:39.467 { 00:13:39.467 "nqn": "nqn.2016-06.io.spdk:cnode32666", 00:13:39.467 "model_number": "SPDK_Controller\u001f", 00:13:39.467 "method": "nvmf_create_subsystem", 00:13:39.467 "req_id": 1 00:13:39.467 } 00:13:39.467 Got JSON-RPC error response 00:13:39.467 response: 00:13:39.467 { 00:13:39.467 "code": -32602, 00:13:39.467 "message": "Invalid MN SPDK_Controller\u001f" 00:13:39.467 }' 00:13:39.467 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:39.467 { 00:13:39.467 "nqn": "nqn.2016-06.io.spdk:cnode32666", 00:13:39.467 "model_number": "SPDK_Controller\u001f", 00:13:39.467 "method": "nvmf_create_subsystem", 00:13:39.467 "req_id": 1 00:13:39.467 } 00:13:39.467 Got JSON-RPC error response 00:13:39.467 response: 00:13:39.467 { 00:13:39.467 "code": -32602, 00:13:39.467 "message": "Invalid MN SPDK_Controller\u001f" 00:13:39.467 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:39.467 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:39.467 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:39.467 07:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:39.467 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:39.467 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:39.467 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:39.467 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.467 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:39.467 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:39.467 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:39.467 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.467 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.467 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:39.467 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:39.467 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:39.467 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.467 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.467 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:39.467 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:39.467 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.468 07:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:39.468 
07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.468 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 
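[Annotation] The wall of printf/echo pairs above and below is gen_random_s at work. invalid.sh seeds RANDOM=0 (invalid.sh@16, a few entries back in this trace) so the "random" strings are reproducible, then builds an N-character string one character at a time from the code points 32..127. Condensed to its essentials — a sketch of the same technique, not a verbatim copy of the helper:

    # build a reproducible N-char string from ASCII code points 32..127
    gen_random_s() {
        local length=$1 ll=0 string= ch
        while ((ll++ < length)); do
            printf -v ch "\\x$(printf %x $((32 + RANDOM % 96)))"   # 96 code points: 0x20..0x7f
            string+=$ch
        done
        printf '%s\n' "$string"
    }

    RANDOM=0
    gen_random_s 21    # 21-character serial-number candidate, as in this run

Because the pool runs up to 0x7f, the output can contain quoting-hostile characters (backslashes, quotes, '|', DEL), which is the point: the strings are fed to nvmf_create_subsystem as serial and model numbers that the target must reject.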
00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 8 == \- ]] 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '8, v\ X%|rV#~q"0;zVcS' 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '8, v\ X%|rV#~q"0;zVcS' nqn.2016-06.io.spdk:cnode22188 00:13:39.729 [2024-11-20 07:12:01.940305] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22188: invalid serial number '8, v\ X%|rV#~q"0;zVcS' 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:39.729 { 00:13:39.729 "nqn": "nqn.2016-06.io.spdk:cnode22188", 00:13:39.729 "serial_number": "8, v\\ X%|rV#~q\"0;zVcS", 00:13:39.729 "method": "nvmf_create_subsystem", 00:13:39.729 "req_id": 1 00:13:39.729 } 00:13:39.729 Got JSON-RPC error response 00:13:39.729 response: 00:13:39.729 { 00:13:39.729 "code": -32602, 00:13:39.729 "message": "Invalid SN 8, v\\ X%|rV#~q\"0;zVcS" 00:13:39.729 }' 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:39.729 { 00:13:39.729 "nqn": "nqn.2016-06.io.spdk:cnode22188", 00:13:39.729 "serial_number": "8, v\\ X%|rV#~q\"0;zVcS", 00:13:39.729 "method": "nvmf_create_subsystem", 00:13:39.729 "req_id": 1 00:13:39.729 } 00:13:39.729 Got JSON-RPC error response 00:13:39.729 response: 00:13:39.729 { 00:13:39.729 "code": -32602, 00:13:39.729 "message": "Invalid SN 8, v\\ X%|rV#~q\"0;zVcS" 00:13:39.729 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' 
'73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.729 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
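[Annotation] This second pass requests 41 characters (gen_random_s 41, invalid.sh@58) where the earlier one requested 21. In the NVMe Identify Controller data the Serial Number field is 20 ASCII bytes and the Model Number field is 40, so 21- and 41-character strings each overflow their field by exactly one byte, and non-printable bytes (like the $'\037' appended to SPDKISFASTANDAWESOME earlier) are invalid at any length. A hypothetical pair of checks mirroring what the target enforces — illustrative only, not SPDK code:

    # NVMe SN/MN fields: fixed-width ASCII, printable characters only (0x20..0x7e)
    is_valid_sn() { local s=$1; (( ${#s} <= 20 )) && [[ $s =~ ^[[:print:]]*$ ]]; }
    is_valid_mn() { local s=$1; (( ${#s} <= 40 )) && [[ $s =~ ^[[:print:]]*$ ]]; }

    is_valid_sn $'SPDKISFASTANDAWESOME\037' || echo rejected   # 21 bytes and contains 0x1f: fails both checks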
00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x4f' 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 91 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:39.992 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll < length )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll++ )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:39.993 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=a 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ < == \- ]] 00:13:40.255 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '<@!UH,`YhO4^#y\0[iyW#z (YwT]{RO*9$lva:Q,D' 00:13:40.256 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '<@!UH,`YhO4^#y\0[iyW#z (YwT]{RO*9$lva:Q,D' nqn.2016-06.io.spdk:cnode1817 00:13:40.256 [2024-11-20 07:12:02.470343] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1817: invalid model number '<@!UH,`YhO4^#y\0[iyW#z (YwT]{RO*9$lva:Q,D' 00:13:40.256 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:40.256 { 00:13:40.256 "nqn": "nqn.2016-06.io.spdk:cnode1817", 00:13:40.256 "model_number": "<@!UH,`YhO4^#y\\0[iyW#z (YwT]{RO*9$lva:Q,D", 00:13:40.256 "method": "nvmf_create_subsystem", 00:13:40.256 "req_id": 1 00:13:40.256 } 00:13:40.256 Got JSON-RPC error response 00:13:40.256 response: 00:13:40.256 { 00:13:40.256 "code": -32602, 00:13:40.256 "message": "Invalid MN <@!UH,`YhO4^#y\\0[iyW#z 
(YwT]{RO*9$lva:Q,D" 00:13:40.256 }' 00:13:40.256 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:40.256 { 00:13:40.256 "nqn": "nqn.2016-06.io.spdk:cnode1817", 00:13:40.256 "model_number": "<@!UH,`YhO4^#y\\0[iyW#z (YwT]{RO*9$lva:Q,D", 00:13:40.256 "method": "nvmf_create_subsystem", 00:13:40.256 "req_id": 1 00:13:40.256 } 00:13:40.256 Got JSON-RPC error response 00:13:40.256 response: 00:13:40.256 { 00:13:40.256 "code": -32602, 00:13:40.256 "message": "Invalid MN <@!UH,`YhO4^#y\\0[iyW#z (YwT]{RO*9$lva:Q,D" 00:13:40.256 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:40.256 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:40.516 [2024-11-20 07:12:02.671201] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.516 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:40.777 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:40.778 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:40.778 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:40.778 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:40.778 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:41.039 [2024-11-20 07:12:03.080665] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:41.039 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:41.039 { 00:13:41.039 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:41.039 "listen_address": { 00:13:41.039 "trtype": "tcp", 00:13:41.039 "traddr": "", 00:13:41.039 "trsvcid": "4421" 00:13:41.039 }, 00:13:41.039 "method": "nvmf_subsystem_remove_listener", 00:13:41.039 "req_id": 1 00:13:41.039 } 00:13:41.039 Got JSON-RPC error response 00:13:41.039 response: 00:13:41.039 { 00:13:41.039 "code": -32602, 00:13:41.039 "message": "Invalid parameters" 00:13:41.039 }' 00:13:41.039 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:41.039 { 00:13:41.039 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:41.039 "listen_address": { 00:13:41.039 "trtype": "tcp", 00:13:41.039 "traddr": "", 00:13:41.039 "trsvcid": "4421" 00:13:41.039 }, 00:13:41.039 "method": "nvmf_subsystem_remove_listener", 00:13:41.039 "req_id": 1 00:13:41.039 } 00:13:41.039 Got JSON-RPC error response 00:13:41.039 response: 00:13:41.039 { 00:13:41.039 "code": -32602, 00:13:41.039 "message": "Invalid parameters" 00:13:41.039 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:41.039 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4353 -i 0 00:13:41.039 [2024-11-20 07:12:03.281368] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4353: invalid cntlid range [0-65519] 00:13:41.300 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # 
out='request: 00:13:41.300 { 00:13:41.300 "nqn": "nqn.2016-06.io.spdk:cnode4353", 00:13:41.300 "min_cntlid": 0, 00:13:41.300 "method": "nvmf_create_subsystem", 00:13:41.300 "req_id": 1 00:13:41.300 } 00:13:41.300 Got JSON-RPC error response 00:13:41.300 response: 00:13:41.300 { 00:13:41.300 "code": -32602, 00:13:41.300 "message": "Invalid cntlid range [0-65519]" 00:13:41.300 }' 00:13:41.300 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:41.300 { 00:13:41.300 "nqn": "nqn.2016-06.io.spdk:cnode4353", 00:13:41.300 "min_cntlid": 0, 00:13:41.300 "method": "nvmf_create_subsystem", 00:13:41.300 "req_id": 1 00:13:41.300 } 00:13:41.300 Got JSON-RPC error response 00:13:41.300 response: 00:13:41.300 { 00:13:41.300 "code": -32602, 00:13:41.300 "message": "Invalid cntlid range [0-65519]" 00:13:41.300 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:41.300 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3844 -i 65520 00:13:41.300 [2024-11-20 07:12:03.486237] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3844: invalid cntlid range [65520-65519] 00:13:41.300 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:41.300 { 00:13:41.300 "nqn": "nqn.2016-06.io.spdk:cnode3844", 00:13:41.300 "min_cntlid": 65520, 00:13:41.300 "method": "nvmf_create_subsystem", 00:13:41.300 "req_id": 1 00:13:41.300 } 00:13:41.300 Got JSON-RPC error response 00:13:41.300 response: 00:13:41.300 { 00:13:41.300 "code": -32602, 00:13:41.300 "message": "Invalid cntlid range [65520-65519]" 00:13:41.300 }' 00:13:41.300 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:41.300 { 00:13:41.300 "nqn": "nqn.2016-06.io.spdk:cnode3844", 00:13:41.300 "min_cntlid": 65520, 00:13:41.300 "method": "nvmf_create_subsystem", 00:13:41.300 "req_id": 1 00:13:41.300 } 00:13:41.300 Got JSON-RPC error response 00:13:41.300 response: 00:13:41.300 { 00:13:41.300 "code": -32602, 00:13:41.300 "message": "Invalid cntlid range [65520-65519]" 00:13:41.300 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:41.300 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12264 -I 0 00:13:41.560 [2024-11-20 07:12:03.678822] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12264: invalid cntlid range [1-0] 00:13:41.560 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:41.560 { 00:13:41.560 "nqn": "nqn.2016-06.io.spdk:cnode12264", 00:13:41.560 "max_cntlid": 0, 00:13:41.560 "method": "nvmf_create_subsystem", 00:13:41.560 "req_id": 1 00:13:41.560 } 00:13:41.560 Got JSON-RPC error response 00:13:41.560 response: 00:13:41.560 { 00:13:41.560 "code": -32602, 00:13:41.560 "message": "Invalid cntlid range [1-0]" 00:13:41.560 }' 00:13:41.560 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:41.560 { 00:13:41.560 "nqn": "nqn.2016-06.io.spdk:cnode12264", 00:13:41.560 "max_cntlid": 0, 00:13:41.560 "method": "nvmf_create_subsystem", 00:13:41.560 "req_id": 1 00:13:41.560 } 00:13:41.560 Got JSON-RPC error response 00:13:41.560 response: 00:13:41.560 { 00:13:41.560 "code": -32602, 00:13:41.560 
"message": "Invalid cntlid range [1-0]" 00:13:41.560 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:41.560 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3000 -I 65520 00:13:41.820 [2024-11-20 07:12:03.867416] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3000: invalid cntlid range [1-65520] 00:13:41.820 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:41.820 { 00:13:41.820 "nqn": "nqn.2016-06.io.spdk:cnode3000", 00:13:41.820 "max_cntlid": 65520, 00:13:41.820 "method": "nvmf_create_subsystem", 00:13:41.820 "req_id": 1 00:13:41.820 } 00:13:41.820 Got JSON-RPC error response 00:13:41.820 response: 00:13:41.820 { 00:13:41.820 "code": -32602, 00:13:41.820 "message": "Invalid cntlid range [1-65520]" 00:13:41.820 }' 00:13:41.820 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:41.820 { 00:13:41.820 "nqn": "nqn.2016-06.io.spdk:cnode3000", 00:13:41.820 "max_cntlid": 65520, 00:13:41.820 "method": "nvmf_create_subsystem", 00:13:41.820 "req_id": 1 00:13:41.820 } 00:13:41.820 Got JSON-RPC error response 00:13:41.820 response: 00:13:41.820 { 00:13:41.820 "code": -32602, 00:13:41.820 "message": "Invalid cntlid range [1-65520]" 00:13:41.820 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:41.820 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20721 -i 6 -I 5 00:13:41.820 [2024-11-20 07:12:04.047980] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20721: invalid cntlid range [6-5] 00:13:41.820 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:41.820 { 00:13:41.820 "nqn": "nqn.2016-06.io.spdk:cnode20721", 00:13:41.820 "min_cntlid": 6, 00:13:41.820 "max_cntlid": 5, 00:13:41.820 "method": "nvmf_create_subsystem", 00:13:41.820 "req_id": 1 00:13:41.820 } 00:13:41.820 Got JSON-RPC error response 00:13:41.820 response: 00:13:41.820 { 00:13:41.820 "code": -32602, 00:13:41.820 "message": "Invalid cntlid range [6-5]" 00:13:41.820 }' 00:13:41.821 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:41.821 { 00:13:41.821 "nqn": "nqn.2016-06.io.spdk:cnode20721", 00:13:41.821 "min_cntlid": 6, 00:13:41.821 "max_cntlid": 5, 00:13:41.821 "method": "nvmf_create_subsystem", 00:13:41.821 "req_id": 1 00:13:41.821 } 00:13:41.821 Got JSON-RPC error response 00:13:41.821 response: 00:13:41.821 { 00:13:41.821 "code": -32602, 00:13:41.821 "message": "Invalid cntlid range [6-5]" 00:13:41.821 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:41.821 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:42.081 { 00:13:42.081 "name": "foobar", 00:13:42.081 "method": "nvmf_delete_target", 00:13:42.081 "req_id": 1 00:13:42.081 } 00:13:42.081 Got JSON-RPC error response 00:13:42.081 response: 00:13:42.081 { 00:13:42.081 "code": -32602, 00:13:42.081 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:13:42.081 }' 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:42.081 { 00:13:42.081 "name": "foobar", 00:13:42.081 "method": "nvmf_delete_target", 00:13:42.081 "req_id": 1 00:13:42.081 } 00:13:42.081 Got JSON-RPC error response 00:13:42.081 response: 00:13:42.081 { 00:13:42.081 "code": -32602, 00:13:42.081 "message": "The specified target doesn't exist, cannot delete it." 00:13:42.081 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:42.081 rmmod nvme_tcp 00:13:42.081 rmmod nvme_fabrics 00:13:42.081 rmmod nvme_keyring 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3441654 ']' 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3441654 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 3441654 ']' 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 3441654 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3441654 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3441654' 00:13:42.081 killing process with pid 3441654 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 3441654 00:13:42.081 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 3441654 00:13:42.340 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:42.340 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:42.340 07:12:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:42.340 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:42.340 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:42.340 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:42.340 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:13:42.340 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:42.340 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:42.340 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.340 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.340 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.250 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:44.250 00:13:44.250 real 0m14.212s 00:13:44.250 user 0m21.297s 00:13:44.250 sys 0m6.717s 00:13:44.250 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:44.250 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:44.250 ************************************ 00:13:44.250 END TEST nvmf_invalid 00:13:44.250 ************************************ 00:13:44.510 07:12:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:44.510 07:12:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:44.510 07:12:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:44.510 07:12:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:44.510 ************************************ 00:13:44.510 START TEST nvmf_connect_stress 00:13:44.510 ************************************ 00:13:44.510 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:44.510 * Looking for test storage... 
00:13:44.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:44.510 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:44.510 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:13:44.510 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:44.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.771 --rc genhtml_branch_coverage=1 00:13:44.771 --rc genhtml_function_coverage=1 00:13:44.771 --rc genhtml_legend=1 00:13:44.771 --rc geninfo_all_blocks=1 00:13:44.771 --rc geninfo_unexecuted_blocks=1 00:13:44.771 00:13:44.771 ' 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:44.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.771 --rc genhtml_branch_coverage=1 00:13:44.771 --rc genhtml_function_coverage=1 00:13:44.771 --rc genhtml_legend=1 00:13:44.771 --rc geninfo_all_blocks=1 00:13:44.771 --rc geninfo_unexecuted_blocks=1 00:13:44.771 00:13:44.771 ' 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:44.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.771 --rc genhtml_branch_coverage=1 00:13:44.771 --rc genhtml_function_coverage=1 00:13:44.771 --rc genhtml_legend=1 00:13:44.771 --rc geninfo_all_blocks=1 00:13:44.771 --rc geninfo_unexecuted_blocks=1 00:13:44.771 00:13:44.771 ' 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:44.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.771 --rc genhtml_branch_coverage=1 00:13:44.771 --rc genhtml_function_coverage=1 00:13:44.771 --rc genhtml_legend=1 00:13:44.771 --rc geninfo_all_blocks=1 00:13:44.771 --rc geninfo_unexecuted_blocks=1 00:13:44.771 00:13:44.771 ' 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.771 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:44.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:44.772 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:52.907 07:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:52.907 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:52.907 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:52.907 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.907 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:52.908 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:52.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:52.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:13:52.908 00:13:52.908 --- 10.0.0.2 ping statistics --- 00:13:52.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.908 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:52.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:52.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:13:52.908 00:13:52.908 --- 10.0.0.1 ping statistics --- 00:13:52.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.908 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3447391 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3447391 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 3447391 ']' 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:52.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:52.908 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.908 [2024-11-20 07:12:14.473115] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:13:52.908 [2024-11-20 07:12:14.473186] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.908 [2024-11-20 07:12:14.573913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:52.908 [2024-11-20 07:12:14.624922] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.909 [2024-11-20 07:12:14.624975] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.909 [2024-11-20 07:12:14.624985] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.909 [2024-11-20 07:12:14.624992] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.909 [2024-11-20 07:12:14.624999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:52.909 [2024-11-20 07:12:14.626883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.909 [2024-11-20 07:12:14.627044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.909 [2024-11-20 07:12:14.627045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.170 [2024-11-20 07:12:15.358275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.170 [2024-11-20 07:12:15.384012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.170 NULL1 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3447475 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.170 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.171 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.171 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.171 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.171 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.171 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.171 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.171 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.171 07:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.171 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:53.432 07:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.432 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.693 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.693 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:13:53.693 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.693 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.693 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.953 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.953 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:13:53.953 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.953 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.953 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.523 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.523 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:13:54.523 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.523 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.523 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.784 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.784 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:13:54.784 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.784 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.784 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.045 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.045 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:13:55.045 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.045 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.045 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.304 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.304 07:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:13:55.304 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.305 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.305 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.564 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.564 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:13:55.564 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.564 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.564 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.134 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.134 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:13:56.134 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.134 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.134 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.394 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.394 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:13:56.394 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.394 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.394 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.654 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.654 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:13:56.654 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.654 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.654 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.914 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.914 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:13:56.914 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.914 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.914 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.173 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.173 07:12:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:13:57.173 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.173 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.173 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.743 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.743 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:13:57.743 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.743 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.743 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.002 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.002 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:13:58.002 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.002 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.002 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.262 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.262 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:13:58.262 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.262 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.262 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.521 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.521 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:13:58.521 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.522 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.522 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.781 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.781 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:13:58.781 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.781 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.781 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.350 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.350 07:12:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:13:59.350 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.350 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.350 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.611 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.611 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:13:59.611 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.611 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.611 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.871 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.871 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:13:59.871 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.871 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.871 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:14:00.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.701 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.701 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:14:00.701 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.701 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.701 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.961 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.961 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:14:00.961 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.961 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.961 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.220 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.220 07:12:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:14:01.220 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.220 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.220 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.479 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.479 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:14:01.479 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.479 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.479 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.739 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.739 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:14:01.739 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.739 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.739 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.309 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.309 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:14:02.309 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.309 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.309 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.569 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.569 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:14:02.569 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.569 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.569 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.829 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.829 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:14:02.829 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.829 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.829 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.088 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.088 07:12:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:14:03.088 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.088 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.088 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.347 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.347 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:14:03.347 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.347 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.347 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.606 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:03.866 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.866 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3447475 00:14:03.866 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3447475) - No such process 00:14:03.866 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3447475 00:14:03.866 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:03.866 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:03.866 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:03.866 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:03.866 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:03.866 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:03.866 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:03.866 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:03.866 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:03.866 rmmod nvme_tcp 00:14:03.866 rmmod nvme_fabrics 00:14:03.866 rmmod nvme_keyring 00:14:03.866 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:03.866 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:03.866 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:03.866 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3447391 ']' 00:14:03.866 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3447391 00:14:03.866 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 3447391 ']' 00:14:03.866 07:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 3447391 00:14:03.866 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:14:03.866 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:03.866 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3447391 00:14:03.866 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:03.866 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:03.866 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3447391' 00:14:03.866 killing process with pid 3447391 00:14:03.866 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 3447391 00:14:03.866 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 3447391 00:14:04.125 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:04.125 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:04.125 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:04.125 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:14:04.125 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:04.125 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:14:04.126 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:14:04.126 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:04.126 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:04.126 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.126 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:04.126 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.034 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:06.034 00:14:06.034 real 0m21.658s 00:14:06.034 user 0m43.199s 00:14:06.034 sys 0m9.529s 00:14:06.034 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:06.034 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.034 ************************************ 00:14:06.034 END TEST nvmf_connect_stress 00:14:06.034 ************************************ 00:14:06.034 07:12:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:06.034 07:12:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:06.034 
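[annotation] That closes out nvmf_connect_stress (21.658 s wall time per the timing block above) just as run_test opens nvmf_fused_ordering. The long repeated block above deserves a gloss: lines 34-35 of connect_stress.sh poll roughly once a second, and the whole stretch reduces to the sketch below. kill -0 sends no signal; it only tests whether the PID still exists, so the loop keeps the target busy with RPC traffic until the stress tool exits, at which point the "No such process" failure at line 34 breaks the loop and lines 38-43 reap the tool and tear the target down. How rpc.txt is wired into rpc_cmd is not visible in the log, so the stdin redirection here is a guess.

    # Shape of the monitor loop traced repeatedly above (connect_stress.sh lines 34-43).
    while kill -0 "$PERF_PID"; do      # line 34: does the stress tool still exist?
        rpc_cmd < "$rpcs"              # line 35: feed the batched RPCs to the target
    done
    wait "$PERF_PID"                   # line 38: reap the exited tool
    rm -f "$rpcs"                      # line 39
    trap - SIGINT SIGTERM EXIT         # line 41
    nvmftestfini                       # line 43: kill nvmf_tgt, unload nvme-tcp/nvme-fabrics, flush test IPs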
07:12:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:06.034 07:12:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:06.295 ************************************ 00:14:06.295 START TEST nvmf_fused_ordering 00:14:06.295 ************************************ 00:14:06.295 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:06.295 * Looking for test storage... 00:14:06.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.295 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:06.295 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:14:06.295 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:06.295 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:06.295 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:06.295 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:06.295 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:06.295 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:06.295 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:06.295 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:06.295 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:06.295 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:06.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.296 --rc genhtml_branch_coverage=1 00:14:06.296 --rc genhtml_function_coverage=1 00:14:06.296 --rc genhtml_legend=1 00:14:06.296 --rc geninfo_all_blocks=1 00:14:06.296 --rc geninfo_unexecuted_blocks=1 00:14:06.296 00:14:06.296 ' 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:06.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.296 --rc genhtml_branch_coverage=1 00:14:06.296 --rc genhtml_function_coverage=1 00:14:06.296 --rc genhtml_legend=1 00:14:06.296 --rc geninfo_all_blocks=1 00:14:06.296 --rc geninfo_unexecuted_blocks=1 00:14:06.296 00:14:06.296 ' 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:06.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.296 --rc genhtml_branch_coverage=1 00:14:06.296 --rc genhtml_function_coverage=1 00:14:06.296 --rc genhtml_legend=1 00:14:06.296 --rc geninfo_all_blocks=1 00:14:06.296 --rc geninfo_unexecuted_blocks=1 00:14:06.296 00:14:06.296 ' 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:06.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.296 --rc genhtml_branch_coverage=1 00:14:06.296 --rc genhtml_function_coverage=1 00:14:06.296 --rc genhtml_legend=1 00:14:06.296 --rc geninfo_all_blocks=1 00:14:06.296 --rc geninfo_unexecuted_blocks=1 00:14:06.296 00:14:06.296 ' 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:06.296 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.297 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:06.297 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:06.297 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:06.297 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.297 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.297 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.297 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:06.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:06.297 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:06.297 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:06.297 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:06.558 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:06.558 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:06.558 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.558 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:06.558 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:06.558 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:06.558 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.558 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.558 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.558 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:06.558 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:06.558 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:06.558 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:14.739 07:12:35 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:14.739 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:14.739 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:14.739 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.739 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:14.740 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:14.740 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:14.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:14.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:14:14.740 00:14:14.740 --- 10.0.0.2 ping statistics --- 00:14:14.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.740 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:14.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:14.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:14:14.740 00:14:14.740 --- 10.0.0.1 ping statistics --- 00:14:14.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.740 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3453784 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3453784 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 3453784 ']' 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:14.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:14.740 [2024-11-20 07:12:36.149564] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:14:14.740 [2024-11-20 07:12:36.149631] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.740 [2024-11-20 07:12:36.251416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.740 [2024-11-20 07:12:36.301895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.740 [2024-11-20 07:12:36.301940] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.740 [2024-11-20 07:12:36.301948] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.740 [2024-11-20 07:12:36.301955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.740 [2024-11-20 07:12:36.301961] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.740 [2024-11-20 07:12:36.302761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:14.740 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.003 [2024-11-20 07:12:37.024345] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.003 [2024-11-20 07:12:37.048643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.003 NULL1 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.003 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:15.003 [2024-11-20 07:12:37.117739] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
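The trace above is the standard SPDK bring-up for this test: nvmf_tgt is started inside the cvl_0_0_ns_spdk namespace, the harness waits for /var/tmp/spdk.sock, and the target is then provisioned over JSON-RPC. A minimal bash sketch of the same sequence, using the values from the trace (the readiness loop is a simplified stand-in for waitforlisten, and the paths assume the jenkins workspace layout shown above):

ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
for i in {1..100}; do [ -S /var/tmp/spdk.sock ] && break; sleep 0.1; done   # wait for the RPC socket

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512        # 1000 MiB null bdev, 512-byte blocks: the "size: 1GB" namespace below
$rpc bdev_wait_for_examine
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1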
00:14:15.003 [2024-11-20 07:12:37.117784] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3454131 ]
00:14:15.573 Attached to nqn.2016-06.io.spdk:cnode1
00:14:15.573 Namespace ID: 1 size: 1GB
00:14:15.573 fused_ordering(0)
00:14:15.573 fused_ordering(1)
[... fused_ordering(2) through fused_ordering(1022) logged in unbroken sequence between 00:14:15.573 and 00:14:17.243; repetitive output condensed ...]
00:14:17.243 fused_ordering(1023)
00:14:17.243 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:14:17.243 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:14:17.243 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:14:17.243 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:14:17.502 rmmod nvme_tcp
00:14:17.502 rmmod nvme_fabrics
00:14:17.502 rmmod nvme_keyring
00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
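Each fused_ordering(N) line above is the test binary reporting one completed iteration of its ordering check; the teardown that follows unloads the nvme-tcp modules and then stops the target with a guarded kill: verify the pid is still alive with kill -0, read its command name, and refuse to kill a bare sudo wrapper. A minimal sketch of that guard, reconstructed from the trace (the helper body is an approximation, not the verbatim autotest_common.sh function):

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 0                  # process already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_1 in the trace below
    [ "$name" = sudo ] && return 1              # never kill a bare sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}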
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3453784 ']' 00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3453784 00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 3453784 ']' 00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 3453784 00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3453784 00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3453784' 00:14:17.502 killing process with pid 3453784 00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 3453784 00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 3453784 00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.502 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.046 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:20.046 00:14:20.046 real 0m13.485s 00:14:20.046 user 0m7.115s 00:14:20.046 sys 0m7.261s 00:14:20.046 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:20.046 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:20.046 ************************************ 00:14:20.046 END TEST nvmf_fused_ordering 00:14:20.046 
************************************ 00:14:20.046 07:12:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:20.046 07:12:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:20.046 07:12:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:20.046 07:12:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:20.046 ************************************ 00:14:20.046 START TEST nvmf_ns_masking 00:14:20.046 ************************************ 00:14:20.046 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:20.046 * Looking for test storage... 00:14:20.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:20.046 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:14:20.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:20.046 --rc genhtml_branch_coverage=1
00:14:20.046 --rc genhtml_function_coverage=1
00:14:20.046 --rc genhtml_legend=1
00:14:20.046 --rc geninfo_all_blocks=1
00:14:20.046 --rc geninfo_unexecuted_blocks=1
00:14:20.046 '
[... the same flag block is then assigned three more times, for LCOV_OPTS=..., export 'LCOV=lcov ...' and LCOV='lcov ...'; duplicate output condensed ...]
00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
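The lt/cmp_versions trace above splits both version strings on '.', '-' and ':' and compares the fields numerically from the left, which is why lcov 1.15 sorts before 2. A compact sketch of that comparison, reconstructed from the trace (function name shortened here):

version_lt() {                                  # succeeds when $1 < $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # missing fields default to 0
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                                    # equal versions are not less-than
}
version_lt 1.15 2 && echo 'lcov 1.15 predates 2'   # matches the return 0 in the trace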
07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2 through @6 each prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH and finally export and echo it; four near-identical multi-kilobyte PATH strings condensed ...]
07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
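The 'integer expression expected' message above is bash's test builtin rejecting an empty operand on the numeric -eq comparison ('[' '' -eq 1 ']' at common.sh line 33); the harness tolerates the non-zero status, but the usual guard is to default the operand before testing. A minimal illustration (the flag name is hypothetical):

flag=""                                     # empty, like the '' in the trace
[ "$flag" -eq 1 ]                           # prints "[: : integer expression expected", exits with status 2
[ "${flag:-0}" -eq 1 ] || echo 'flag off'   # ${var:-0} keeps the operand numeric, so the test is well-formed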
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=2d94847e-60c2-45eb-8e26-ef4d063e8a23 00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=dcb974ce-b258-4253-8bd3-e3efa7dca617 00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=2f8907e2-4f33-48c5-a3a7-38d253ef3c72 00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:20.047 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:28.182 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:28.182 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:28.182 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:28.183 07:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:28.183 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:28.183 07:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:28.183 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:28.183 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
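[Annotation] The block above is nvmf/common.sh sorting the machine's NICs by PCI vendor:device ID and then resolving each selected device to its kernel netdev name under sysfs. A condensed sketch of the two steps, assuming a pci_bus_cache associative array mapping "vendor:device" to PCI addresses has already been populated (the exact helper internals are not shown in the trace):

    intel=0x8086
    e810=() pci_devs=() net_devs=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})
    e810+=(${pci_bus_cache["$intel:0x159b"]})  # matches 0000:4b:00.0/1 found above
    pci_devs=("${e810[@]}")                    # SPDK_TEST_NVMF_NICS=e810 narrows here
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")  # strip the sysfs path, keep the name
        net_devs+=("${pci_net_devs[@]}")         # yields cvl_0_0 and cvl_0_1 here
    done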
00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:28.183 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:28.183 07:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:28.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:28.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:14:28.183 00:14:28.183 --- 10.0.0.2 ping statistics --- 00:14:28.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.183 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:28.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:28.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:14:28.183 00:14:28.183 --- 10.0.0.1 ping statistics --- 00:14:28.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.183 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:28.183 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:28.184 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:28.184 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:28.184 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:28.184 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:28.184 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:28.184 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:28.184 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:28.184 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:28.184 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:28.184 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:28.184 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:28.184 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3458800 00:14:28.184 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3458800 00:14:28.184 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:28.184 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3458800 ']' 00:14:28.184 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
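[Annotation] With both ports on one host, nvmf_tcp_init avoids the kernel short-circuiting local traffic by pushing the target-side port into its own network namespace; the initiator port stays in the root namespace, TCP/4420 is opened in iptables, and a ping in each direction proves the path before the target starts. The topology, condensed from the trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # root ns -> target ns, 0% loss above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1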
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.184 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:28.184 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.184 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:28.184 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:28.184 [2024-11-20 07:12:49.775568] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:14:28.184 [2024-11-20 07:12:49.775638] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.184 [2024-11-20 07:12:49.873626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.184 [2024-11-20 07:12:49.923702] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.184 [2024-11-20 07:12:49.923754] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.184 [2024-11-20 07:12:49.923762] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:28.184 [2024-11-20 07:12:49.923769] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:28.184 [2024-11-20 07:12:49.923775] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
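[Annotation] The target application itself is launched inside the namespace, so it binds 10.0.0.2 natively; nvmfappstart records the pid and blocks until the default RPC socket is listening before any RPC is issued. A sketch of what the wrapper boils down to, per the trace:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!                  # 3458800 in this run
    waitforlisten "$nvmfpid"    # polls /var/tmp/spdk.sock, as logged above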
00:14:28.184 [2024-11-20 07:12:49.924513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.443 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:28.443 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:28.443 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:28.443 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:28.443 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:28.443 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.443 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:28.702 [2024-11-20 07:12:50.786324] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.702 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:28.702 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:28.702 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:28.962 Malloc1 00:14:28.962 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:28.962 Malloc2 00:14:28.962 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:29.223 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:29.484 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.484 [2024-11-20 07:12:51.758119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.743 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:29.743 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2f8907e2-4f33-48c5-a3a7-38d253ef3c72 -a 10.0.0.2 -s 4420 -i 4 00:14:29.743 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:29.743 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:29.743 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:29.743 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:29.743 
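[Annotation] The setup traced above boils down to six RPCs plus one initiator command: create the TCP transport, back two namespaces with malloc bdevs, create an allow-any-host subsystem (-a), attach Malloc1 as namespace 1, listen on the namespaced IP, then connect from the root namespace with 4 I/O queues. Condensed, with rpc_py and the IDs as defined earlier:

    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc1   # 64 MiB, 512 B blocks
    $rpc_py bdev_malloc_create 64 512 -b Malloc2
    $rpc_py nvmf_create_subsystem $SUBSYSNQN -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns $SUBSYSNQN Malloc1 -n 1
    $rpc_py nvmf_subsystem_add_listener $SUBSYSNQN -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n $SUBSYSNQN -q $HOSTNQN1 -I "$HOSTID" -a 10.0.0.2 -s 4420 -i 4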
07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:31.654 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:31.654 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:31.654 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:31.914 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:31.914 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:31.914 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:31.914 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:31.914 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:31.914 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:31.914 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:31.914 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:31.914 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:31.914 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:31.914 [ 0]:0x1 00:14:31.914 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:31.914 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:31.914 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4710f751644545f4a49e7635654b59b6 00:14:31.914 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4710f751644545f4a49e7635654b59b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.914 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:32.175 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:32.175 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.175 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:32.175 [ 0]:0x1 00:14:32.175 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:32.175 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.175 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4710f751644545f4a49e7635654b59b6 00:14:32.175 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4710f751644545f4a49e7635654b59b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.175 07:12:54 
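[Annotation] Every visibility assertion in this test goes through the same two probes: the namespace must show up in nvme list-ns, and its NGUID from id-ns must be non-zero (a masked namespace reads back as all zeroes). A rough reconstruction of the ns_is_visible helper being exercised above; the real ns_masking.sh helper may differ in detail:

    ns_is_visible() {    # $1 is the nsid, e.g. 0x1
        nvme list-ns /dev/nvme0 | grep "$1" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
    ns_is_visible 0x1    # passes here: NGUID 4710f751... is non-zero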
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:32.175 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.175 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:32.175 [ 1]:0x2 00:14:32.175 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.175 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:32.175 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=358e913ff4e7478a887cd19e35d6cb83 00:14:32.175 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 358e913ff4e7478a887cd19e35d6cb83 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.175 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:32.175 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:32.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.435 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.435 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:32.695 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:32.695 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2f8907e2-4f33-48c5-a3a7-38d253ef3c72 -a 10.0.0.2 -s 4420 -i 4 00:14:32.955 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:32.955 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:32.955 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:32.955 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:14:32.955 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:14:32.955 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:34.866 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:34.866 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:34.867 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:34.867 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:34.867 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:34.867 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # 
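[Annotation] This is the pivot of the whole test: namespace 1 is detached and re-attached with --no-auto-visible, which flips it from "visible to every connected host" to "visible only to hosts explicitly granted", and the initiator reconnects to observe the difference. Condensed from the trace; the expected outcome follows just below (ns_is_visible 0x1 now fails with an all-zero NGUID, 0x2 still passes):

    nvme disconnect -n $SUBSYSNQN
    $rpc_py nvmf_subsystem_remove_ns $SUBSYSNQN 1
    $rpc_py nvmf_subsystem_add_ns $SUBSYSNQN Malloc1 -n 1 --no-auto-visible
    nvme connect -t tcp -n $SUBSYSNQN -q $HOSTNQN1 -I "$HOSTID" -a 10.0.0.2 -s 4420 -i 4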
return 0 00:14:34.867 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:34.867 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:35.128 [ 0]:0x2 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=358e913ff4e7478a887cd19e35d6cb83 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 358e913ff4e7478a887cd19e35d6cb83 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.128 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:35.389 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:35.389 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:35.389 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:35.389 [ 0]:0x1 00:14:35.389 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:35.389 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:35.389 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4710f751644545f4a49e7635654b59b6 00:14:35.389 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4710f751644545f4a49e7635654b59b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.389 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:35.389 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:35.389 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:35.389 [ 1]:0x2 00:14:35.389 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:35.389 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:35.389 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=358e913ff4e7478a887cd19e35d6cb83 00:14:35.389 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 358e913ff4e7478a887cd19e35d6cb83 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.389 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.651 07:12:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:35.651 [ 0]:0x2 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=358e913ff4e7478a887cd19e35d6cb83 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 358e913ff4e7478a887cd19e35d6cb83 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:35.651 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:35.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.652 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:35.912 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:35.912 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2f8907e2-4f33-48c5-a3a7-38d253ef3c72 -a 10.0.0.2 -s 4420 -i 4 00:14:36.172 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:36.172 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:36.172 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
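[Annotation] The add/remove pair above shows that masking is per (namespace, host NQN) and takes effect live, without a reconnect: granting host1 makes namespace 1 reappear on the existing connection, and revoking it zeroes the NGUID again. The two RPCs being toggled:

    $rpc_py nvmf_ns_add_host    $SUBSYSNQN 1 $HOSTNQN1   # ns 1 visible to host1
    ns_is_visible 0x1                                    # now passes
    $rpc_py nvmf_ns_remove_host $SUBSYSNQN 1 $HOSTNQN1   # revoke again
    ns_is_visible 0x1                                    # fails: NGUID all zeroes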
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:36.172 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:14:36.172 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:14:36.172 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:38.093 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:38.093 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:38.093 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:38.093 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:14:38.093 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:38.093 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:38.093 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:38.093 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:38.093 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:38.093 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:38.093 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:38.093 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:38.093 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:38.093 [ 0]:0x1 00:14:38.093 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:38.093 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:38.093 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4710f751644545f4a49e7635654b59b6 00:14:38.093 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4710f751644545f4a49e7635654b59b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.093 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:38.093 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:38.093 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:38.093 [ 1]:0x2 00:14:38.354 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:38.354 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:38.354 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=358e913ff4e7478a887cd19e35d6cb83 00:14:38.354 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 358e913ff4e7478a887cd19e35d6cb83 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.354 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:38.354 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:38.354 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:38.354 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:38.354 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:38.354 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.354 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:38.354 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.354 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:38.354 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:38.354 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:38.354 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:38.354 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:38.614 [ 0]:0x2 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=358e913ff4e7478a887cd19e35d6cb83 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 358e913ff4e7478a887cd19e35d6cb83 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.614 07:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:38.614 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:38.875 [2024-11-20 07:13:00.923660] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:38.875 request: 00:14:38.875 { 00:14:38.875 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:38.875 "nsid": 2, 00:14:38.875 "host": "nqn.2016-06.io.spdk:host1", 00:14:38.875 "method": "nvmf_ns_remove_host", 00:14:38.875 "req_id": 1 00:14:38.875 } 00:14:38.875 Got JSON-RPC error response 00:14:38.875 response: 00:14:38.875 { 00:14:38.875 "code": -32602, 00:14:38.875 "message": "Invalid parameters" 00:14:38.875 } 00:14:38.875 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:38.875 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:38.875 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:38.875 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:38.875 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:38.875 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:38.875 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:38.875 07:13:00 
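[Annotation] The negative check above is deliberate: namespace 2 was attached without --no-auto-visible, so it carries no per-host visibility list, and asking the target to remove a host from it is rejected with the JSON-RPC -32602 "Invalid parameters" error captured in the trace. The NOT wrapper just asserts the RPC exits non-zero, roughly:

    if $rpc_py nvmf_ns_remove_host $SUBSYSNQN 2 $HOSTNQN1; then
        echo "unexpected success" >&2
        exit 1
    fi   # expected: 'Unable to add/remove ... to namespace ID 2', rc != 0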
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:38.875 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.875 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:38.875 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.875 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:38.875 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:38.875 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:38.875 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:38.875 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:38.875 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:38.875 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.875 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:38.875 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:38.875 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:38.875 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:38.875 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:38.875 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:38.875 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:38.875 [ 0]:0x2 00:14:38.875 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:38.875 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:38.875 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=358e913ff4e7478a887cd19e35d6cb83 00:14:38.875 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 358e913ff4e7478a887cd19e35d6cb83 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.875 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:38.875 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:39.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.135 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3461017 00:14:39.135 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.135 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:39.135 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3461017 /var/tmp/host.sock 00:14:39.135 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3461017 ']' 00:14:39.135 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:14:39.135 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:39.135 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:39.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:39.135 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:39.135 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:39.135 [2024-11-20 07:13:01.292490] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:14:39.135 [2024-11-20 07:13:01.292543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3461017 ] 00:14:39.135 [2024-11-20 07:13:01.380012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.395 [2024-11-20 07:13:01.415977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.966 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:39.966 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:39.966 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.225 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:40.225 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 2d94847e-60c2-45eb-8e26-ef4d063e8a23 00:14:40.225 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:40.225 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2D94847E60C245EB8E26EF4D063E8A23 -i 00:14:40.485 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid dcb974ce-b258-4253-8bd3-e3efa7dca617 00:14:40.485 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:40.485 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g DCB974CEB25842538BD3E3EFA7DCA617 -i 00:14:40.745 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
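[Annotation] From here the test drives a second SPDK process as the initiator: spdk_tgt is started on its own core with a dedicated RPC socket, and the namespaces are re-created with explicit NGUIDs derived from the UUIDs. A sketch of the re-seeding traced above; uuid2nguid evidently strips dashes and upper-cases (nvmf/common.sh@787 shows the 'tr -d -' half, the upper-casing is inferred from the NGUID values), and the trailing '-i' flag is copied verbatim from the trace:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
        -r /var/tmp/host.sock -m 2 &
    hostpid=$!                               # 3461017 in this run
    ns1nguid=$(tr -d - <<< "${ns1uuid^^}")   # 2D94847E60C245EB8E26EF4D063E8A23
    ns2nguid=$(tr -d - <<< "${ns2uuid^^}")
    $rpc_py nvmf_subsystem_remove_ns $SUBSYSNQN 1
    $rpc_py nvmf_subsystem_remove_ns $SUBSYSNQN 2
    $rpc_py nvmf_subsystem_add_ns $SUBSYSNQN Malloc1 -n 1 -g "$ns1nguid" -i
    $rpc_py nvmf_subsystem_add_ns $SUBSYSNQN Malloc2 -n 2 -g "$ns2nguid" -i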
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:40.745 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:41.004 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:41.004 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:41.264 nvme0n1 00:14:41.264 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:41.264 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:41.523 nvme1n2 00:14:41.523 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:41.523 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:41.523 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:41.523 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:41.523 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:41.783 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:41.783 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:41.783 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:41.783 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:42.043 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 2d94847e-60c2-45eb-8e26-ef4d063e8a23 == \2\d\9\4\8\4\7\e\-\6\0\c\2\-\4\5\e\b\-\8\e\2\6\-\e\f\4\d\0\6\3\e\8\a\2\3 ]] 00:14:42.043 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:42.043 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:42.043 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:42.043 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
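[Annotation] Instead of the kernel initiator, verification now uses SPDK's own NVMe-oF initiator inside the second process: bdev_nvme_attach_controller is sent to the host.sock RPC server once per host NQN, and the resulting bdev names encode which namespace each host was allowed to see (nvme0n1 for host1, nvme1n2 for host2). Condensed, with hostrpc matching the ns_masking.sh@48 wrapper traced above:

    hostrpc() { $rpc_py -s /var/tmp/host.sock "$@"; }
    hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n $SUBSYSNQN -q $HOSTNQN1 -b nvme0            # sees ns 1 only -> nvme0n1
    hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n $SUBSYSNQN -q $HOSTNQN2 -b nvme1            # sees ns 2 only -> nvme1n2
    hostrpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs   # "nvme0n1 nvme1n2"
    hostrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'       # matches ns1uuid above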
dcb974ce-b258-4253-8bd3-e3efa7dca617 == \d\c\b\9\7\4\c\e\-\b\2\5\8\-\4\2\5\3\-\8\b\d\3\-\e\3\e\f\a\7\d\c\a\6\1\7 ]] 00:14:42.043 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.302 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:42.562 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 2d94847e-60c2-45eb-8e26-ef4d063e8a23 00:14:42.562 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:42.562 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2D94847E60C245EB8E26EF4D063E8A23 00:14:42.562 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:42.562 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2D94847E60C245EB8E26EF4D063E8A23 00:14:42.562 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:42.562 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.562 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:42.562 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.562 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:42.562 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.562 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:42.562 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:42.562 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2D94847E60C245EB8E26EF4D063E8A23 00:14:42.562 [2024-11-20 07:13:04.825876] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:42.562 [2024-11-20 07:13:04.825906] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:42.562 [2024-11-20 07:13:04.825913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.562 request: 00:14:42.562 { 00:14:42.562 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.562 "namespace": { 00:14:42.562 "bdev_name": 
"invalid", 00:14:42.562 "nsid": 1, 00:14:42.562 "nguid": "2D94847E60C245EB8E26EF4D063E8A23", 00:14:42.562 "no_auto_visible": false 00:14:42.562 }, 00:14:42.562 "method": "nvmf_subsystem_add_ns", 00:14:42.562 "req_id": 1 00:14:42.562 } 00:14:42.562 Got JSON-RPC error response 00:14:42.562 response: 00:14:42.562 { 00:14:42.562 "code": -32602, 00:14:42.562 "message": "Invalid parameters" 00:14:42.562 } 00:14:42.822 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:42.822 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:42.822 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:42.822 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:42.822 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 2d94847e-60c2-45eb-8e26-ef4d063e8a23 00:14:42.822 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:42.822 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2D94847E60C245EB8E26EF4D063E8A23 -i 00:14:42.822 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:45.366 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:45.366 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:45.366 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:45.366 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:45.366 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3461017 00:14:45.366 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3461017 ']' 00:14:45.366 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3461017 00:14:45.366 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:14:45.366 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:45.366 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3461017 00:14:45.366 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:45.366 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:45.366 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3461017' 00:14:45.366 killing process with pid 3461017 00:14:45.366 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3461017 00:14:45.366 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3461017 00:14:45.366 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.627 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:45.627 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:45.627 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:45.627 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:45.627 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:45.627 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:45.627 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:45.627 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:45.627 rmmod nvme_tcp 00:14:45.627 rmmod nvme_fabrics 00:14:45.627 rmmod nvme_keyring 00:14:45.627 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:45.627 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:45.627 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:45.627 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3458800 ']' 00:14:45.627 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3458800 00:14:45.627 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3458800 ']' 00:14:45.627 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3458800 00:14:45.627 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:14:45.627 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:45.627 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3458800 00:14:45.627 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:45.627 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:45.627 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3458800' 00:14:45.627 killing process with pid 3458800 00:14:45.627 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3458800 00:14:45.627 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3458800 00:14:45.888 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:45.888 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:45.888 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:45.888 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:45.889 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:45.889 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
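For readability, the ns_masking flow exercised above reduces to the RPC sequence below. This is a condensed sketch rather than the test script itself: the rpc.py path, subsystem NQN and NGUIDs are copied from this run, the NGUIDs are just the namespace UUIDs with dashes stripped (the uuid2nguid / tr -d - step in the trace), and -i corresponds to the no_auto_visible flag visible in the captured JSON-RPC request.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Re-add both namespaces with fixed NGUIDs and no automatic host visibility (-i).
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2D94847E60C245EB8E26EF4D063E8A23 -i
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g DCB974CEB25842538BD3E3EFA7DCA617 -i

# Grant namespace 1 to host1 and namespace 2 to host2 only.
$rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
$rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2

# One controller per host NQN via the secondary app on /var/tmp/host.sock; the
# bdev_get_bdevs checks in the trace then confirm host1 sees only nvme0n1
# (uuid 2d94847e-...) and host2 only nvme1n2 (uuid dcb974ce-...).
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1

# Negative check: a namespace backed by a nonexistent bdev must be rejected
# (the -32602 "Invalid parameters" response captured above).
! $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2D94847E60C245EB8E26EF4D063E8A23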
00:14:45.889 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:45.889 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:45.889 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:45.889 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.889 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.889 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.801 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:47.802 00:14:47.802 real 0m28.104s 00:14:47.802 user 0m32.045s 00:14:47.802 sys 0m8.218s 00:14:47.802 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:47.802 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:47.802 ************************************ 00:14:47.802 END TEST nvmf_ns_masking 00:14:47.802 ************************************ 00:14:47.802 07:13:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:47.802 07:13:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:47.802 07:13:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:47.802 07:13:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:47.802 07:13:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:48.062 ************************************ 00:14:48.062 START TEST nvmf_nvme_cli 00:14:48.062 ************************************ 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:48.063 * Looking for test storage... 
00:14:48.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:48.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.063 --rc genhtml_branch_coverage=1 00:14:48.063 --rc genhtml_function_coverage=1 00:14:48.063 --rc genhtml_legend=1 00:14:48.063 --rc geninfo_all_blocks=1 00:14:48.063 --rc geninfo_unexecuted_blocks=1 00:14:48.063 00:14:48.063 ' 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:48.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.063 --rc genhtml_branch_coverage=1 00:14:48.063 --rc genhtml_function_coverage=1 00:14:48.063 --rc genhtml_legend=1 00:14:48.063 --rc geninfo_all_blocks=1 00:14:48.063 --rc geninfo_unexecuted_blocks=1 00:14:48.063 00:14:48.063 ' 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:48.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.063 --rc genhtml_branch_coverage=1 00:14:48.063 --rc genhtml_function_coverage=1 00:14:48.063 --rc genhtml_legend=1 00:14:48.063 --rc geninfo_all_blocks=1 00:14:48.063 --rc geninfo_unexecuted_blocks=1 00:14:48.063 00:14:48.063 ' 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:48.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.063 --rc genhtml_branch_coverage=1 00:14:48.063 --rc genhtml_function_coverage=1 00:14:48.063 --rc genhtml_legend=1 00:14:48.063 --rc geninfo_all_blocks=1 00:14:48.063 --rc geninfo_unexecuted_blocks=1 00:14:48.063 00:14:48.063 ' 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
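The lt 1.15 2 walk traced above is scripts/common.sh deciding whether the installed lcov predates version 2, which selects the older --rc lcov_* option spellings exported next: versions are split on the characters . - :, each field is coerced to a number, and fields are compared left to right. The following is a simplified standalone reconstruction of that logic, not the helper itself (the real decimal helper rejects non-numeric fields rather than zeroing them):

lt() {   # lt A B -> returns 0 exactly when version A < version B
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
        [[ $a =~ ^[0-9]+$ ]] || a=0             # loose stand-in for the decimal helper
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not less-than
}
lt 1.15 2 && echo "old lcov option spellings selected"   # 1 < 2 on the first field, as in the trace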
00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.063 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.064 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:48.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:48.064 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:48.064 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:48.064 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:48.349 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:48.349 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:48.349 07:13:10 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:48.349 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:48.349 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:48.349 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.349 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:48.349 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:48.349 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:48.349 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.349 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.349 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.349 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:48.349 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:48.349 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:48.349 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.488 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:56.488 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:56.488 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:56.488 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:56.488 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:56.488 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:56.488 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:56.488 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:56.488 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:56.488 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:56.488 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:56.489 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:56.489 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.489 
07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:56.489 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:56.489 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:56.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:56.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:14:56.489 00:14:56.489 --- 10.0.0.2 ping statistics --- 00:14:56.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.489 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:14:56.489 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:56.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:56.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:14:56.489 00:14:56.490 --- 10.0.0.1 ping statistics --- 00:14:56.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.490 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:14:56.490 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.490 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:56.490 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:56.490 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.490 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:56.490 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:56.490 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.490 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:56.490 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:56.490 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:56.490 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:56.490 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:56.490 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.490 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3466696 00:14:56.490 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3466696 00:14:56.490 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:56.490 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 3466696 ']' 00:14:56.490 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.490 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:56.490 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.490 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:56.490 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.490 [2024-11-20 07:13:17.959437] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
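The nvmf_tcp_init block above is ordinary iproute2 plumbing: the target-side port of the two-port E810 is moved into its own network namespace so initiator and target traffic cross the physical link even on a single machine. Stripped of the xtrace noise, and with the iptables comment match omitted, it amounts to:

# cvl_0_0 becomes the target interface inside the namespace;
# cvl_0_1 stays in the root namespace as the initiator interface.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Let NVMe/TCP (port 4420) in on the initiator side, then verify both directions,
# mirroring the two pings in the log.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1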
00:14:56.490 [2024-11-20 07:13:17.959503] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.490 [2024-11-20 07:13:18.057176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:56.490 [2024-11-20 07:13:18.111187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.490 [2024-11-20 07:13:18.111239] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.490 [2024-11-20 07:13:18.111250] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.490 [2024-11-20 07:13:18.111257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.490 [2024-11-20 07:13:18.111267] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.490 [2024-11-20 07:13:18.113241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.490 [2024-11-20 07:13:18.113380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.490 [2024-11-20 07:13:18.113542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.490 [2024-11-20 07:13:18.113542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:56.751 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:56.751 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:14:56.751 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:56.751 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.752 [2024-11-20 07:13:18.841310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.752 Malloc0 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
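Once the target app is up on its four cores, the nvme_cli test provisions it over JSON-RPC and then drives it with stock nvme-cli. The trace in this section reduces to roughly the following sketch; rpc_cmd is the framework's helper, shown here as a thin stand-in around the workspace rpc.py (the real helper reuses an open socket), and the host NQN/ID are the nvme gen-hostnqn values printed earlier in the log.

rpc_cmd() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }   # stand-in

# Target side: TCP transport, two 64 MiB / 512 B malloc bdevs, one subsystem
# carrying both namespaces, plus data and discovery listeners on 10.0.0.2:4420.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd bdev_malloc_create 64 512 -b Malloc1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Host side: discover, connect, wait until both namespaces surface as
# /dev/nvme0n1 and /dev/nvme0n2, then disconnect.
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 4420
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # the test polls until this reaches 2
nvme disconnect -n nqn.2016-06.io.spdk:cnode1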
00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.752 Malloc1 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.752 [2024-11-20 07:13:18.956603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.752 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:14:57.013 00:14:57.013 Discovery Log Number of Records 2, Generation counter 2 00:14:57.013 =====Discovery Log Entry 0====== 00:14:57.013 trtype: tcp 00:14:57.013 adrfam: ipv4 00:14:57.013 subtype: current discovery subsystem 00:14:57.013 treq: not required 00:14:57.013 portid: 0 00:14:57.013 trsvcid: 4420 00:14:57.013 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:57.013 traddr: 10.0.0.2 00:14:57.013 eflags: explicit discovery connections, duplicate discovery information 00:14:57.013 sectype: none 00:14:57.013 =====Discovery Log Entry 1====== 00:14:57.013 trtype: tcp 00:14:57.013 adrfam: ipv4 00:14:57.013 subtype: nvme subsystem 00:14:57.013 treq: not required 00:14:57.013 portid: 0 00:14:57.013 trsvcid: 4420 00:14:57.013 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:57.013 traddr: 10.0.0.2 00:14:57.013 eflags: none 00:14:57.013 sectype: none 00:14:57.013 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:57.013 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:57.013 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:57.013 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:57.013 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:57.013 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:57.013 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:57.013 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:57.013 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:57.013 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:57.013 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:58.927 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:58.927 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:14:58.927 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:58.927 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:14:58.927 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:14:58.927 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:00.840 07:13:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:00.840 /dev/nvme0n2 ]] 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:00.840 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:00.840 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:00.840 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:00.840 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:00.840 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:00.840 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:00.840 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:00.840 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:00.840 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:00.840 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:00.840 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:00.840 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:00.840 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:01.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.410 07:13:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:01.410 rmmod nvme_tcp 00:15:01.410 rmmod nvme_fabrics 00:15:01.410 rmmod nvme_keyring 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3466696 ']' 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3466696 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 3466696 ']' 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 3466696 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
3466696 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3466696' 00:15:01.410 killing process with pid 3466696 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 3466696 00:15:01.410 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 3466696 00:15:01.671 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:01.671 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:01.671 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:01.671 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:15:01.671 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:15:01.671 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:01.671 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:15:01.671 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:01.671 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:01.671 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.671 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:01.671 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.585 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:03.585 00:15:03.585 real 0m15.696s 00:15:03.585 user 0m24.580s 00:15:03.585 sys 0m6.473s 00:15:03.585 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:03.585 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:03.585 ************************************ 00:15:03.585 END TEST nvmf_nvme_cli 00:15:03.585 ************************************ 00:15:03.585 07:13:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:03.585 07:13:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:03.585 07:13:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:03.585 07:13:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:03.585 07:13:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:03.846 ************************************ 00:15:03.846 START TEST nvmf_vfio_user 00:15:03.846 ************************************ 00:15:03.846 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:15:03.846 * Looking for test storage... 00:15:03.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:03.846 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:03.846 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:15:03.846 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:03.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.846 --rc genhtml_branch_coverage=1 00:15:03.846 --rc genhtml_function_coverage=1 00:15:03.846 --rc genhtml_legend=1 00:15:03.846 --rc geninfo_all_blocks=1 00:15:03.846 --rc geninfo_unexecuted_blocks=1 00:15:03.846 00:15:03.846 ' 00:15:03.846 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:03.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.846 --rc genhtml_branch_coverage=1 00:15:03.846 --rc genhtml_function_coverage=1 00:15:03.846 --rc genhtml_legend=1 00:15:03.846 --rc geninfo_all_blocks=1 00:15:03.846 --rc geninfo_unexecuted_blocks=1 00:15:03.846 00:15:03.846 ' 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:03.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.847 --rc genhtml_branch_coverage=1 00:15:03.847 --rc genhtml_function_coverage=1 00:15:03.847 --rc genhtml_legend=1 00:15:03.847 --rc geninfo_all_blocks=1 00:15:03.847 --rc geninfo_unexecuted_blocks=1 00:15:03.847 00:15:03.847 ' 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:03.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.847 --rc genhtml_branch_coverage=1 00:15:03.847 --rc genhtml_function_coverage=1 00:15:03.847 --rc genhtml_legend=1 00:15:03.847 --rc geninfo_all_blocks=1 00:15:03.847 --rc geninfo_unexecuted_blocks=1 00:15:03.847 00:15:03.847 ' 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:03.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
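The records that follow show the test launching the SPDK target and then blocking until its RPC socket answers. A minimal standalone sketch of that launch-and-wait step, assuming the standard SPDK repo root (the polling loop is an illustrative stand-in for the test's waitforlisten helper, not the helper itself):

  # start the NVMe-oF target on cores 0-3, as the trace below does
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
  # poll the default RPC socket (/var/tmp/spdk.sock) until the target answers
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done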
00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:03.847 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:04.108 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:04.108 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:04.108 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:04.108 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3468347 00:15:04.108 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3468347' 00:15:04.108 Process pid: 3468347 00:15:04.108 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:04.108 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3468347 00:15:04.109 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:04.109 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 3468347 ']' 00:15:04.109 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.109 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:04.109 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.109 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:04.109 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:04.109 [2024-11-20 07:13:26.178019] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:15:04.109 [2024-11-20 07:13:26.178098] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.109 [2024-11-20 07:13:26.265829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:04.109 [2024-11-20 07:13:26.300790] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.109 [2024-11-20 07:13:26.300822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:04.109 [2024-11-20 07:13:26.300828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.109 [2024-11-20 07:13:26.300832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.109 [2024-11-20 07:13:26.300837] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:04.109 [2024-11-20 07:13:26.302180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.109 [2024-11-20 07:13:26.302325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:04.109 [2024-11-20 07:13:26.302568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.109 [2024-11-20 07:13:26.302568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:05.050 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:05.050 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:15:05.050 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:05.991 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:05.991 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:05.991 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:05.991 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:05.991 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:05.991 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:06.252 Malloc1 00:15:06.252 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:06.511 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:06.511 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:06.772 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:06.772 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:06.772 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:07.032 Malloc2 00:15:07.032 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
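Condensed from the trace above, the per-device provisioning is a fixed RPC sequence; device 1 is shown here, and device 2 (cnode2, Malloc2, vfio-user2) repeats it verbatim. The commands are exactly those logged; only the $rpc shorthand is ours:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER            # once, before any device
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1   # socket directory for the listener
  $rpc bdev_malloc_create 64 512 -b Malloc1         # 64 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0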
00:15:07.293 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:07.293 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:07.561 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:07.561 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:07.561 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:07.561 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:07.561 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:07.561 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:07.561 [2024-11-20 07:13:29.712015] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:15:07.561 [2024-11-20 07:13:29.712054] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3469116 ] 00:15:07.561 [2024-11-20 07:13:29.753390] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:07.561 [2024-11-20 07:13:29.763467] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:07.561 [2024-11-20 07:13:29.763484] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3c0c043000 00:15:07.561 [2024-11-20 07:13:29.764461] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:07.561 [2024-11-20 07:13:29.765463] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:07.561 [2024-11-20 07:13:29.766466] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:07.561 [2024-11-20 07:13:29.767470] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:07.561 [2024-11-20 07:13:29.768485] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:07.561 [2024-11-20 07:13:29.769492] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:07.561 [2024-11-20 07:13:29.770495] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:15:07.561 [2024-11-20 07:13:29.771498] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:07.561 [2024-11-20 07:13:29.772510] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:07.561 [2024-11-20 07:13:29.772518] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3c0c038000 00:15:07.561 [2024-11-20 07:13:29.773433] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:07.561 [2024-11-20 07:13:29.786444] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:07.561 [2024-11-20 07:13:29.786466] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:07.561 [2024-11-20 07:13:29.791625] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:07.561 [2024-11-20 07:13:29.791660] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:07.561 [2024-11-20 07:13:29.791721] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:07.561 [2024-11-20 07:13:29.791733] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:07.561 [2024-11-20 07:13:29.791737] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:07.561 [2024-11-20 07:13:29.792626] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:07.561 [2024-11-20 07:13:29.792633] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:07.561 [2024-11-20 07:13:29.792638] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:07.561 [2024-11-20 07:13:29.793628] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:07.561 [2024-11-20 07:13:29.793634] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:07.561 [2024-11-20 07:13:29.793640] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:07.561 [2024-11-20 07:13:29.794638] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:07.561 [2024-11-20 07:13:29.794645] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:07.561 [2024-11-20 07:13:29.795644] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:15:07.561 [2024-11-20 07:13:29.795651] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:07.561 [2024-11-20 07:13:29.795654] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:07.561 [2024-11-20 07:13:29.795659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:07.561 [2024-11-20 07:13:29.795765] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:07.561 [2024-11-20 07:13:29.795769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:07.561 [2024-11-20 07:13:29.795773] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:07.561 [2024-11-20 07:13:29.796650] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:07.561 [2024-11-20 07:13:29.797650] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:07.561 [2024-11-20 07:13:29.798654] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:07.561 [2024-11-20 07:13:29.799652] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:07.561 [2024-11-20 07:13:29.799702] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:07.561 [2024-11-20 07:13:29.800668] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:07.561 [2024-11-20 07:13:29.800674] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:07.561 [2024-11-20 07:13:29.800677] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:07.561 [2024-11-20 07:13:29.800692] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:07.561 [2024-11-20 07:13:29.800698] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:07.561 [2024-11-20 07:13:29.800708] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:07.561 [2024-11-20 07:13:29.800711] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:07.561 [2024-11-20 07:13:29.800714] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.561 [2024-11-20 07:13:29.800723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:15:07.561 [2024-11-20 07:13:29.800753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:07.561 [2024-11-20 07:13:29.800760] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:07.561 [2024-11-20 07:13:29.800764] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:07.561 [2024-11-20 07:13:29.800767] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:07.561 [2024-11-20 07:13:29.800771] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:07.561 [2024-11-20 07:13:29.800776] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:07.561 [2024-11-20 07:13:29.800779] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:07.561 [2024-11-20 07:13:29.800782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:07.561 [2024-11-20 07:13:29.800789] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:07.561 [2024-11-20 07:13:29.800796] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:07.561 [2024-11-20 07:13:29.800809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:07.561 [2024-11-20 07:13:29.800817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.561 [2024-11-20 07:13:29.800823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.561 [2024-11-20 07:13:29.800829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.561 [2024-11-20 07:13:29.800835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.561 [2024-11-20 07:13:29.800840] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:07.561 [2024-11-20 07:13:29.800845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:07.561 [2024-11-20 07:13:29.800851] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:07.561 [2024-11-20 07:13:29.800858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:07.561 [2024-11-20 07:13:29.800863] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:07.561 
[2024-11-20 07:13:29.800867] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:07.561 [2024-11-20 07:13:29.800871] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:07.561 [2024-11-20 07:13:29.800875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:07.561 [2024-11-20 07:13:29.800882] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:07.561 [2024-11-20 07:13:29.800893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:07.561 [2024-11-20 07:13:29.800936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:07.561 [2024-11-20 07:13:29.800942] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:07.561 [2024-11-20 07:13:29.800947] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:07.561 [2024-11-20 07:13:29.800951] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:07.561 [2024-11-20 07:13:29.800953] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.561 [2024-11-20 07:13:29.800957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:07.561 [2024-11-20 07:13:29.800972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:07.561 [2024-11-20 07:13:29.800978] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:07.561 [2024-11-20 07:13:29.800986] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:07.561 [2024-11-20 07:13:29.800992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:07.561 [2024-11-20 07:13:29.800997] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:07.561 [2024-11-20 07:13:29.801000] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:07.561 [2024-11-20 07:13:29.801002] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.561 [2024-11-20 07:13:29.801007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:07.561 [2024-11-20 07:13:29.801024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:07.561 [2024-11-20 07:13:29.801037] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:15:07.562 [2024-11-20 07:13:29.801042] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:07.562 [2024-11-20 07:13:29.801047] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:07.562 [2024-11-20 07:13:29.801050] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:07.562 [2024-11-20 07:13:29.801053] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.562 [2024-11-20 07:13:29.801057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:07.562 [2024-11-20 07:13:29.801067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:07.562 [2024-11-20 07:13:29.801072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:07.562 [2024-11-20 07:13:29.801077] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:07.562 [2024-11-20 07:13:29.801082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:15:07.562 [2024-11-20 07:13:29.801087] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:07.562 [2024-11-20 07:13:29.801090] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:07.562 [2024-11-20 07:13:29.801094] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:07.562 [2024-11-20 07:13:29.801097] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:07.562 [2024-11-20 07:13:29.801101] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:07.562 [2024-11-20 07:13:29.801104] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:07.562 [2024-11-20 07:13:29.801118] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:07.562 [2024-11-20 07:13:29.801126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:07.562 [2024-11-20 07:13:29.801135] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:07.562 [2024-11-20 07:13:29.801143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:07.562 [2024-11-20 07:13:29.801151] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:07.562 [2024-11-20 07:13:29.801164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:07.562 [2024-11-20 07:13:29.801172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:07.562 [2024-11-20 07:13:29.801179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:07.562 [2024-11-20 07:13:29.801188] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:07.562 [2024-11-20 07:13:29.801193] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:07.562 [2024-11-20 07:13:29.801195] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:07.562 [2024-11-20 07:13:29.801198] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:07.562 [2024-11-20 07:13:29.801200] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:07.562 [2024-11-20 07:13:29.801205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:07.562 [2024-11-20 07:13:29.801210] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:07.562 [2024-11-20 07:13:29.801213] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:07.562 [2024-11-20 07:13:29.801216] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.562 [2024-11-20 07:13:29.801220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:07.562 [2024-11-20 07:13:29.801225] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:07.562 [2024-11-20 07:13:29.801228] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:07.562 [2024-11-20 07:13:29.801231] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.562 [2024-11-20 07:13:29.801235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:07.562 [2024-11-20 07:13:29.801241] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:07.562 [2024-11-20 07:13:29.801244] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:07.562 [2024-11-20 07:13:29.801246] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.562 [2024-11-20 07:13:29.801251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:07.562 [2024-11-20 07:13:29.801256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:07.562 [2024-11-20 07:13:29.801264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:15:07.562 [2024-11-20 07:13:29.801271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:07.562 [2024-11-20 07:13:29.801276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:07.562 ===================================================== 00:15:07.562 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:07.562 ===================================================== 00:15:07.562 Controller Capabilities/Features 00:15:07.562 ================================ 00:15:07.562 Vendor ID: 4e58 00:15:07.562 Subsystem Vendor ID: 4e58 00:15:07.562 Serial Number: SPDK1 00:15:07.562 Model Number: SPDK bdev Controller 00:15:07.562 Firmware Version: 25.01 00:15:07.562 Recommended Arb Burst: 6 00:15:07.562 IEEE OUI Identifier: 8d 6b 50 00:15:07.562 Multi-path I/O 00:15:07.562 May have multiple subsystem ports: Yes 00:15:07.562 May have multiple controllers: Yes 00:15:07.562 Associated with SR-IOV VF: No 00:15:07.562 Max Data Transfer Size: 131072 00:15:07.562 Max Number of Namespaces: 32 00:15:07.562 Max Number of I/O Queues: 127 00:15:07.562 NVMe Specification Version (VS): 1.3 00:15:07.562 NVMe Specification Version (Identify): 1.3 00:15:07.562 Maximum Queue Entries: 256 00:15:07.562 Contiguous Queues Required: Yes 00:15:07.562 Arbitration Mechanisms Supported 00:15:07.562 Weighted Round Robin: Not Supported 00:15:07.562 Vendor Specific: Not Supported 00:15:07.562 Reset Timeout: 15000 ms 00:15:07.562 Doorbell Stride: 4 bytes 00:15:07.562 NVM Subsystem Reset: Not Supported 00:15:07.562 Command Sets Supported 00:15:07.562 NVM Command Set: Supported 00:15:07.562 Boot Partition: Not Supported 00:15:07.562 Memory Page Size Minimum: 4096 bytes 00:15:07.562 Memory Page Size Maximum: 4096 bytes 00:15:07.562 Persistent Memory Region: Not Supported 00:15:07.562 Optional Asynchronous Events Supported 00:15:07.562 Namespace Attribute Notices: Supported 00:15:07.562 Firmware Activation Notices: Not Supported 00:15:07.562 ANA Change Notices: Not Supported 00:15:07.562 PLE Aggregate Log Change Notices: Not Supported 00:15:07.562 LBA Status Info Alert Notices: Not Supported 00:15:07.562 EGE Aggregate Log Change Notices: Not Supported 00:15:07.562 Normal NVM Subsystem Shutdown event: Not Supported 00:15:07.562 Zone Descriptor Change Notices: Not Supported 00:15:07.562 Discovery Log Change Notices: Not Supported 00:15:07.562 Controller Attributes 00:15:07.562 128-bit Host Identifier: Supported 00:15:07.562 Non-Operational Permissive Mode: Not Supported 00:15:07.562 NVM Sets: Not Supported 00:15:07.562 Read Recovery Levels: Not Supported 00:15:07.562 Endurance Groups: Not Supported 00:15:07.562 Predictable Latency Mode: Not Supported 00:15:07.562 Traffic Based Keep ALive: Not Supported 00:15:07.562 Namespace Granularity: Not Supported 00:15:07.562 SQ Associations: Not Supported 00:15:07.562 UUID List: Not Supported 00:15:07.562 Multi-Domain Subsystem: Not Supported 00:15:07.562 Fixed Capacity Management: Not Supported 00:15:07.562 Variable Capacity Management: Not Supported 00:15:07.562 Delete Endurance Group: Not Supported 00:15:07.562 Delete NVM Set: Not Supported 00:15:07.562 Extended LBA Formats Supported: Not Supported 00:15:07.562 Flexible Data Placement Supported: Not Supported 00:15:07.562 00:15:07.562 Controller Memory Buffer Support 00:15:07.562 ================================ 00:15:07.562 
Supported: No 00:15:07.562 00:15:07.562 Persistent Memory Region Support 00:15:07.562 ================================ 00:15:07.562 Supported: No 00:15:07.562 00:15:07.562 Admin Command Set Attributes 00:15:07.562 ============================ 00:15:07.562 Security Send/Receive: Not Supported 00:15:07.562 Format NVM: Not Supported 00:15:07.562 Firmware Activate/Download: Not Supported 00:15:07.562 Namespace Management: Not Supported 00:15:07.562 Device Self-Test: Not Supported 00:15:07.562 Directives: Not Supported 00:15:07.562 NVMe-MI: Not Supported 00:15:07.562 Virtualization Management: Not Supported 00:15:07.562 Doorbell Buffer Config: Not Supported 00:15:07.562 Get LBA Status Capability: Not Supported 00:15:07.562 Command & Feature Lockdown Capability: Not Supported 00:15:07.562 Abort Command Limit: 4 00:15:07.562 Async Event Request Limit: 4 00:15:07.562 Number of Firmware Slots: N/A 00:15:07.562 Firmware Slot 1 Read-Only: N/A 00:15:07.562 Firmware Activation Without Reset: N/A 00:15:07.562 Multiple Update Detection Support: N/A 00:15:07.562 Firmware Update Granularity: No Information Provided 00:15:07.562 Per-Namespace SMART Log: No 00:15:07.562 Asymmetric Namespace Access Log Page: Not Supported 00:15:07.562 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:07.562 Command Effects Log Page: Supported 00:15:07.562 Get Log Page Extended Data: Supported 00:15:07.562 Telemetry Log Pages: Not Supported 00:15:07.562 Persistent Event Log Pages: Not Supported 00:15:07.562 Supported Log Pages Log Page: May Support 00:15:07.562 Commands Supported & Effects Log Page: Not Supported 00:15:07.562 Feature Identifiers & Effects Log Page:May Support 00:15:07.562 NVMe-MI Commands & Effects Log Page: May Support 00:15:07.562 Data Area 4 for Telemetry Log: Not Supported 00:15:07.562 Error Log Page Entries Supported: 128 00:15:07.562 Keep Alive: Supported 00:15:07.562 Keep Alive Granularity: 10000 ms 00:15:07.562 00:15:07.562 NVM Command Set Attributes 00:15:07.562 ========================== 00:15:07.562 Submission Queue Entry Size 00:15:07.562 Max: 64 00:15:07.562 Min: 64 00:15:07.562 Completion Queue Entry Size 00:15:07.562 Max: 16 00:15:07.562 Min: 16 00:15:07.562 Number of Namespaces: 32 00:15:07.562 Compare Command: Supported 00:15:07.562 Write Uncorrectable Command: Not Supported 00:15:07.562 Dataset Management Command: Supported 00:15:07.562 Write Zeroes Command: Supported 00:15:07.562 Set Features Save Field: Not Supported 00:15:07.562 Reservations: Not Supported 00:15:07.562 Timestamp: Not Supported 00:15:07.562 Copy: Supported 00:15:07.562 Volatile Write Cache: Present 00:15:07.562 Atomic Write Unit (Normal): 1 00:15:07.562 Atomic Write Unit (PFail): 1 00:15:07.562 Atomic Compare & Write Unit: 1 00:15:07.562 Fused Compare & Write: Supported 00:15:07.562 Scatter-Gather List 00:15:07.562 SGL Command Set: Supported (Dword aligned) 00:15:07.562 SGL Keyed: Not Supported 00:15:07.562 SGL Bit Bucket Descriptor: Not Supported 00:15:07.562 SGL Metadata Pointer: Not Supported 00:15:07.562 Oversized SGL: Not Supported 00:15:07.562 SGL Metadata Address: Not Supported 00:15:07.562 SGL Offset: Not Supported 00:15:07.562 Transport SGL Data Block: Not Supported 00:15:07.562 Replay Protected Memory Block: Not Supported 00:15:07.562 00:15:07.562 Firmware Slot Information 00:15:07.562 ========================= 00:15:07.562 Active slot: 1 00:15:07.562 Slot 1 Firmware Revision: 25.01 00:15:07.562 00:15:07.562 00:15:07.562 Commands Supported and Effects 00:15:07.562 ============================== 00:15:07.562 Admin 
Commands 00:15:07.562 -------------- 00:15:07.562 Get Log Page (02h): Supported 00:15:07.562 Identify (06h): Supported 00:15:07.562 Abort (08h): Supported 00:15:07.562 Set Features (09h): Supported 00:15:07.562 Get Features (0Ah): Supported 00:15:07.562 Asynchronous Event Request (0Ch): Supported 00:15:07.562 Keep Alive (18h): Supported 00:15:07.562 I/O Commands 00:15:07.562 ------------ 00:15:07.562 Flush (00h): Supported LBA-Change 00:15:07.562 Write (01h): Supported LBA-Change 00:15:07.562 Read (02h): Supported 00:15:07.562 Compare (05h): Supported 00:15:07.562 Write Zeroes (08h): Supported LBA-Change 00:15:07.562 Dataset Management (09h): Supported LBA-Change 00:15:07.562 Copy (19h): Supported LBA-Change 00:15:07.562 00:15:07.562 Error Log 00:15:07.562 ========= 00:15:07.562 00:15:07.562 Arbitration 00:15:07.562 =========== 00:15:07.562 Arbitration Burst: 1 00:15:07.562 00:15:07.562 Power Management 00:15:07.562 ================ 00:15:07.562 Number of Power States: 1 00:15:07.562 Current Power State: Power State #0 00:15:07.562 Power State #0: 00:15:07.562 Max Power: 0.00 W 00:15:07.562 Non-Operational State: Operational 00:15:07.562 Entry Latency: Not Reported 00:15:07.562 Exit Latency: Not Reported 00:15:07.562 Relative Read Throughput: 0 00:15:07.562 Relative Read Latency: 0 00:15:07.562 Relative Write Throughput: 0 00:15:07.562 Relative Write Latency: 0 00:15:07.562 Idle Power: Not Reported 00:15:07.562 Active Power: Not Reported 00:15:07.562 Non-Operational Permissive Mode: Not Supported 00:15:07.562 00:15:07.563 Health Information 00:15:07.563 ================== 00:15:07.563 Critical Warnings: 00:15:07.563 Available Spare Space: OK 00:15:07.563 Temperature: OK 00:15:07.563 Device Reliability: OK 00:15:07.563 Read Only: No 00:15:07.563 Volatile Memory Backup: OK 00:15:07.563 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:07.563 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:07.563 Available Spare: 0% 00:15:07.563 Available Spare Threshold: 0% 00:15:07.563 Life Percentage Used: 0% 00:15:07.563 Data Units Read: 0 00:15:07.563 Data Units Written: 0 00:15:07.563 Host Read Commands: 0 00:15:07.563 Host Write Commands: 0 00:15:07.563 Controller Busy Time: 0 minutes 00:15:07.563 Power Cycles: 0 00:15:07.563 Power On Hours: 0 hours 00:15:07.563 Unsafe Shutdowns: 0 00:15:07.563 Unrecoverable Media Errors: 0 00:15:07.563 Lifetime Error Log Entries: 0 00:15:07.563 Warning Temperature Time: 0 minutes 00:15:07.563 Critical Temperature Time: 0 minutes 00:15:07.563 00:15:07.563 Number of Queues 00:15:07.563 ================ 00:15:07.563 Number of I/O Submission Queues: 127 00:15:07.563 Number of I/O Completion Queues: 127 00:15:07.563 00:15:07.563 Active Namespaces 00:15:07.563 ================= 00:15:07.563 Namespace ID:1 00:15:07.563 Error Recovery Timeout: Unlimited 00:15:07.563 Command Set Identifier: NVM (00h) 00:15:07.563 Deallocate: Supported 00:15:07.563 Deallocated/Unwritten Error: Not Supported 00:15:07.563 Deallocated Read Value: Unknown 00:15:07.563 Deallocate in Write Zeroes: Not Supported 00:15:07.563 Deallocated Guard Field: 0xFFFF 00:15:07.563 Flush: Supported 00:15:07.563 Reservation: Supported 00:15:07.563 Namespace Sharing Capabilities: Multiple Controllers 00:15:07.563 Size (in LBAs): 131072 (0GiB) 00:15:07.563 Capacity (in LBAs): 131072 (0GiB) 00:15:07.563 Utilization (in LBAs): 131072 (0GiB) 00:15:07.563 NGUID: C0B78BCF1F0646489663FAFD96EC2591 00:15:07.563 UUID: c0b78bcf-1f06-4648-9663-fafd96ec2591 00:15:07.563 Thin Provisioning: Not Supported 00:15:07.563 Per-NS Atomic Units: Yes 00:15:07.563 Atomic Boundary Size (Normal): 0 00:15:07.563 Atomic Boundary Size (PFail): 0 00:15:07.563 Atomic Boundary Offset: 0 00:15:07.563 Maximum Single Source Range Length: 65535 00:15:07.563 Maximum Copy Length: 65535 00:15:07.563 Maximum Source Range Count: 1 00:15:07.563 NGUID/EUI64 Never Reused: No 00:15:07.563 Namespace Write Protected: No 00:15:07.563 Number of LBA Formats: 1 00:15:07.563 Current LBA Format: LBA Format #00 00:15:07.563 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:07.563 00:15:07.563
[2024-11-20 07:13:29.801353] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:07.563 [2024-11-20 07:13:29.801359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:07.563 [2024-11-20 07:13:29.801380] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:15:07.563 [2024-11-20 07:13:29.801387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.563 [2024-11-20 07:13:29.801391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.563 [2024-11-20 07:13:29.801396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.563 [2024-11-20 07:13:29.801400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.563 [2024-11-20 07:13:29.801672] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:07.563 [2024-11-20 07:13:29.801682] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:07.563 [2024-11-20 07:13:29.802674] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:07.563 [2024-11-20 07:13:29.802714] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:15:07.563 [2024-11-20 07:13:29.802719] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:15:07.563 [2024-11-20 07:13:29.803676] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:07.563 [2024-11-20 07:13:29.803685] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:15:07.563 [2024-11-20 07:13:29.803734] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:07.563 [2024-11-20 07:13:29.804699] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:07.563
00:15:07.907 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
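The spdk_nvme_perf invocation above drives the vfio-user controller directly: -q 128 is the queue depth, -o 4096 the I/O size in bytes, -w read the access pattern, -t 5 the run time in seconds, and -c 0x2 a core mask pinning the worker to core 1; -s 256 and -g are DPDK hugepage-memory options. As a minimal sketch of the same benchmark pointed at a TCP listener instead of the vfio-user socket (the address, port, and relative binary path are illustrative placeholders, not values from this run):

  # hypothetical TCP variant of the perf run above; 10.0.0.2:4420 is a placeholder
  ./build/bin/spdk_nvme_perf \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -q 128 -o 4096 -w read -t 5 -c 0x2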
00:15:07.907 [2024-11-20 07:13:29.992844] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:13.239 Initializing NVMe Controllers 00:15:13.239 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:13.239 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:13.239 Initialization complete. Launching workers. 00:15:13.239 ======================================================== 00:15:13.239 Latency(us) 00:15:13.239 Device Information : IOPS MiB/s Average min max 00:15:13.239 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40070.60 156.53 3197.03 850.95 7770.62 00:15:13.239 ======================================================== 00:15:13.239 Total : 40070.60 156.53 3197.03 850.95 7770.62 00:15:13.239 00:15:13.239 [2024-11-20 07:13:35.013687] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:13.239 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:13.239 [2024-11-20 07:13:35.206552] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:18.519 Initializing NVMe Controllers 00:15:18.519 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:18.519 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:18.519 Initialization complete. Launching workers. 
00:15:18.519 ======================================================== 00:15:18.519 Latency(us) 00:15:18.519 Device Information : IOPS MiB/s Average min max 00:15:18.519 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7985.49 4987.90 14959.61 00:15:18.519 ======================================================== 00:15:18.519 Total : 16051.20 62.70 7985.49 4987.90 14959.61 00:15:18.519 00:15:18.519 [2024-11-20 07:13:40.246814] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:18.519 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:18.519 [2024-11-20 07:13:40.452668] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:23.798 [2024-11-20 07:13:45.538404] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:23.798 Initializing NVMe Controllers 00:15:23.798 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:23.798 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:23.798 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:23.798 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:23.798 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:23.798 Initialization complete. Launching workers. 00:15:23.798 Starting thread on core 2 00:15:23.798 Starting thread on core 3 00:15:23.798 Starting thread on core 1 00:15:23.798 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:23.798 [2024-11-20 07:13:45.782373] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:27.098 [2024-11-20 07:13:48.830555] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:27.099 Initializing NVMe Controllers 00:15:27.099 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:27.099 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:27.099 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:27.099 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:27.099 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:27.099 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:27.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:27.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:27.099 Initialization complete. Launching workers. 
00:15:27.099 Starting thread on core 1 with urgent priority queue 00:15:27.099 Starting thread on core 2 with urgent priority queue 00:15:27.099 Starting thread on core 3 with urgent priority queue 00:15:27.099 Starting thread on core 0 with urgent priority queue 00:15:27.099 SPDK bdev Controller (SPDK1 ) core 0: 14392.00 IO/s 6.95 secs/100000 ios 00:15:27.099 SPDK bdev Controller (SPDK1 ) core 1: 13165.33 IO/s 7.60 secs/100000 ios 00:15:27.099 SPDK bdev Controller (SPDK1 ) core 2: 13356.67 IO/s 7.49 secs/100000 ios 00:15:27.099 SPDK bdev Controller (SPDK1 ) core 3: 14699.00 IO/s 6.80 secs/100000 ios 00:15:27.099 ======================================================== 00:15:27.099 00:15:27.099 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:27.099 [2024-11-20 07:13:49.073423] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:27.099 Initializing NVMe Controllers 00:15:27.100 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:27.100 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:27.100 Namespace ID: 1 size: 0GB 00:15:27.100 Initialization complete. 00:15:27.100 INFO: using host memory buffer for IO 00:15:27.100 Hello world! 00:15:27.100 [2024-11-20 07:13:49.107606] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:27.100 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:27.100 [2024-11-20 07:13:49.342212] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:28.484 Initializing NVMe Controllers 00:15:28.484 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:28.484 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:28.484 Initialization complete. Launching workers. 
00:15:28.484 submit (in ns) avg, min, max = 7019.8, 2821.7, 4004902.5 00:15:28.484 complete (in ns) avg, min, max = 16198.6, 1639.2, 4005943.3 00:15:28.484 00:15:28.484 Submit histogram 00:15:28.484 ================ 00:15:28.484 Range in us Cumulative Count 00:15:28.484 2.813 - 2.827: 0.1111% ( 23) 00:15:28.484 2.827 - 2.840: 1.0866% ( 202) 00:15:28.484 2.840 - 2.853: 3.3081% ( 460) 00:15:28.484 2.853 - 2.867: 7.0121% ( 767) 00:15:28.484 2.867 - 2.880: 11.9042% ( 1013) 00:15:28.484 2.880 - 2.893: 17.7380% ( 1208) 00:15:28.484 2.893 - 2.907: 24.0643% ( 1310) 00:15:28.484 2.907 - 2.920: 29.9222% ( 1213) 00:15:28.484 2.920 - 2.933: 36.1327% ( 1286) 00:15:28.484 2.933 - 2.947: 42.2369% ( 1264) 00:15:28.484 2.947 - 2.960: 47.7906% ( 1150) 00:15:28.484 2.960 - 2.973: 53.4747% ( 1177) 00:15:28.484 2.973 - 2.987: 61.1243% ( 1584) 00:15:28.484 2.987 - 3.000: 70.2468% ( 1889) 00:15:28.484 3.000 - 3.013: 79.1182% ( 1837) 00:15:28.484 3.013 - 3.027: 84.8795% ( 1193) 00:15:28.484 3.027 - 3.040: 90.3946% ( 1142) 00:15:28.484 3.040 - 3.053: 94.4705% ( 844) 00:15:28.484 3.053 - 3.067: 96.9576% ( 515) 00:15:28.484 3.067 - 3.080: 98.2566% ( 269) 00:15:28.484 3.080 - 3.093: 98.9086% ( 135) 00:15:28.484 3.093 - 3.107: 99.2177% ( 64) 00:15:28.484 3.107 - 3.120: 99.3770% ( 33) 00:15:28.484 3.120 - 3.133: 99.4688% ( 19) 00:15:28.484 3.133 - 3.147: 99.5364% ( 14) 00:15:28.484 3.147 - 3.160: 99.5460% ( 2) 00:15:28.484 3.160 - 3.173: 99.5509% ( 1) 00:15:28.484 3.173 - 3.187: 99.5557% ( 1) 00:15:28.484 3.200 - 3.213: 99.5605% ( 1) 00:15:28.484 3.400 - 3.413: 99.5702% ( 2) 00:15:28.484 3.733 - 3.760: 99.5750% ( 1) 00:15:28.484 4.133 - 4.160: 99.5799% ( 1) 00:15:28.484 4.293 - 4.320: 99.5847% ( 1) 00:15:28.484 4.373 - 4.400: 99.5895% ( 1) 00:15:28.484 4.507 - 4.533: 99.5943% ( 1) 00:15:28.484 4.587 - 4.613: 99.6040% ( 2) 00:15:28.484 4.667 - 4.693: 99.6088% ( 1) 00:15:28.484 4.693 - 4.720: 99.6185% ( 2) 00:15:28.484 4.720 - 4.747: 99.6330% ( 3) 00:15:28.484 4.747 - 4.773: 99.6378% ( 1) 00:15:28.484 4.800 - 4.827: 99.6523% ( 3) 00:15:28.484 4.827 - 4.853: 99.6571% ( 1) 00:15:28.484 4.853 - 4.880: 99.6668% ( 2) 00:15:28.484 4.880 - 4.907: 99.6764% ( 2) 00:15:28.484 4.907 - 4.933: 99.6861% ( 2) 00:15:28.484 4.960 - 4.987: 99.6909% ( 1) 00:15:28.484 4.987 - 5.013: 99.7054% ( 3) 00:15:28.484 5.013 - 5.040: 99.7102% ( 1) 00:15:28.484 5.040 - 5.067: 99.7296% ( 4) 00:15:28.484 5.173 - 5.200: 99.7344% ( 1) 00:15:28.484 5.227 - 5.253: 99.7392% ( 1) 00:15:28.484 5.360 - 5.387: 99.7440% ( 1) 00:15:28.484 5.547 - 5.573: 99.7489% ( 1) 00:15:28.484 5.600 - 5.627: 99.7537% ( 1) 00:15:28.484 5.680 - 5.707: 99.7585% ( 1) 00:15:28.484 5.707 - 5.733: 99.7634% ( 1) 00:15:28.484 5.733 - 5.760: 99.7682% ( 1) 00:15:28.484 5.760 - 5.787: 99.7730% ( 1) 00:15:28.484 5.787 - 5.813: 99.7779% ( 1) 00:15:28.484 6.053 - 6.080: 99.7827% ( 1) 00:15:28.484 6.213 - 6.240: 99.7875% ( 1) 00:15:28.484 6.293 - 6.320: 99.7923% ( 1) 00:15:28.484 6.320 - 6.347: 99.7972% ( 1) 00:15:28.484 6.427 - 6.453: 99.8020% ( 1) 00:15:28.484 6.533 - 6.560: 99.8165% ( 3) 00:15:28.484 6.693 - 6.720: 99.8261% ( 2) 00:15:28.484 6.880 - 6.933: 99.8358% ( 2) 00:15:28.484 6.987 - 7.040: 99.8406% ( 1) 00:15:28.484 7.040 - 7.093: 99.8503% ( 2) 00:15:28.484 7.093 - 7.147: 99.8600% ( 2) 00:15:28.484 7.200 - 7.253: 99.8696% ( 2) 00:15:28.484 7.253 - 7.307: 99.8744% ( 1) 00:15:28.484 7.307 - 7.360: 99.8793% ( 1) 00:15:28.484 7.680 - 7.733: 99.8841% ( 1) 00:15:28.484 8.213 - 8.267: 99.8889% ( 1) 00:15:28.484 8.533 - 8.587: 99.8938% ( 1) 00:15:28.484 9.813 - 9.867: 99.8986% ( 1) 
00:15:28.484 3986.773 - 4014.080: 100.0000% ( 21) 00:15:28.484 00:15:28.484 Complete histogram 00:15:28.484 ================== 00:15:28.484 [2024-11-20 07:13:50.363923] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:28.484 Range in us Cumulative Count 00:15:28.484 1.633 - 1.640: 0.0193% ( 4) 00:15:28.484 1.640 - 1.647: 0.7534% ( 152) 00:15:28.484 1.647 - 1.653: 0.8451% ( 19) 00:15:28.484 1.653 - 1.660: 0.9224% ( 16) 00:15:28.484 1.660 - 1.667: 1.0624% ( 29) 00:15:28.484 1.667 - 1.673: 1.0914% ( 6) 00:15:28.484 1.673 - 1.680: 1.1107% ( 4) 00:15:28.484 1.680 - 1.687: 1.1252% ( 3) 00:15:28.484 1.687 - 1.693: 1.1301% ( 1) 00:15:28.484 1.693 - 1.700: 1.2170% ( 18) 00:15:28.484 1.700 - 1.707: 28.9950% ( 5752) 00:15:28.484 1.707 - 1.720: 55.5947% ( 5508) 00:15:28.484 1.720 - 1.733: 73.7432% ( 3758) 00:15:28.484 1.733 - 1.747: 81.5956% ( 1626) 00:15:28.484 1.747 - 1.760: 83.1941% ( 331) 00:15:28.484 1.760 - 1.773: 87.9413% ( 983) 00:15:28.484 1.773 - 1.787: 93.4563% ( 1142) 00:15:28.484 1.787 - 1.800: 97.1652% ( 768) 00:15:28.484 1.800 - 1.813: 98.8023% ( 339) 00:15:28.484 1.813 - 1.827: 99.3674% ( 117) 00:15:28.484 1.827 - 1.840: 99.4591% ( 19) 00:15:28.484 1.840 - 1.853: 99.4639% ( 1) 00:15:28.484 1.853 - 1.867: 99.4688% ( 1) 00:15:28.484 1.880 - 1.893: 99.4736% ( 1) 00:15:28.484 1.960 - 1.973: 99.4784% ( 1) 00:15:28.485 3.293 - 3.307: 99.4833% ( 1) 00:15:28.485 3.307 - 3.320: 99.4881% ( 1) 00:15:28.485 3.333 - 3.347: 99.4929% ( 1) 00:15:28.485 3.387 - 3.400: 99.4978% ( 1) 00:15:28.485 3.413 - 3.440: 99.5026% ( 1) 00:15:28.485 3.467 - 3.493: 99.5122% ( 2) 00:15:28.485 3.627 - 3.653: 99.5171% ( 1) 00:15:28.485 3.787 - 3.813: 99.5219% ( 1) 00:15:28.485 3.867 - 3.893: 99.5316% ( 2) 00:15:28.485 3.947 - 3.973: 99.5364% ( 1) 00:15:28.485 4.027 - 4.053: 99.5412% ( 1) 00:15:28.485 4.560 - 4.587: 99.5460% ( 1) 00:15:28.485 4.693 - 4.720: 99.5509% ( 1) 00:15:28.485 4.747 - 4.773: 99.5557% ( 1) 00:15:28.485 4.880 - 4.907: 99.5605% ( 1) 00:15:28.485 4.960 - 4.987: 99.5654% ( 1) 00:15:28.485 5.013 - 5.040: 99.5702% ( 1) 00:15:28.485 5.040 - 5.067: 99.5750% ( 1) 00:15:28.485 5.147 - 5.173: 99.5799% ( 1) 00:15:28.485 5.173 - 5.200: 99.5847% ( 1) 00:15:28.485 5.200 - 5.227: 99.5895% ( 1) 00:15:28.485 5.307 - 5.333: 99.5943% ( 1) 00:15:28.485 5.333 - 5.360: 99.5992% ( 1) 00:15:28.485 5.387 - 5.413: 99.6088% ( 2) 00:15:28.485 5.573 - 5.600: 99.6137% ( 1) 00:15:28.485 5.707 - 5.733: 99.6185% ( 1) 00:15:28.485 6.027 - 6.053: 99.6233% ( 1) 00:15:28.485 6.187 - 6.213: 99.6281% ( 1) 00:15:28.485 7.093 - 7.147: 99.6330% ( 1) 00:15:28.485 7.147 - 7.200: 99.6378% ( 1) 00:15:28.485 3986.773 - 4014.080: 100.0000% ( 75) 00:15:28.485 00:15:28.485 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:28.485 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:28.485 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:28.485 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:28.485 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:28.485 [ 00:15:28.485 { 00:15:28.485 "nqn":
"nqn.2014-08.org.nvmexpress.discovery", 00:15:28.485 "subtype": "Discovery", 00:15:28.485 "listen_addresses": [], 00:15:28.485 "allow_any_host": true, 00:15:28.485 "hosts": [] 00:15:28.485 }, 00:15:28.485 { 00:15:28.485 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:28.485 "subtype": "NVMe", 00:15:28.485 "listen_addresses": [ 00:15:28.485 { 00:15:28.485 "trtype": "VFIOUSER", 00:15:28.485 "adrfam": "IPv4", 00:15:28.485 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:28.485 "trsvcid": "0" 00:15:28.485 } 00:15:28.485 ], 00:15:28.485 "allow_any_host": true, 00:15:28.485 "hosts": [], 00:15:28.485 "serial_number": "SPDK1", 00:15:28.485 "model_number": "SPDK bdev Controller", 00:15:28.485 "max_namespaces": 32, 00:15:28.485 "min_cntlid": 1, 00:15:28.485 "max_cntlid": 65519, 00:15:28.485 "namespaces": [ 00:15:28.485 { 00:15:28.485 "nsid": 1, 00:15:28.485 "bdev_name": "Malloc1", 00:15:28.485 "name": "Malloc1", 00:15:28.485 "nguid": "C0B78BCF1F0646489663FAFD96EC2591", 00:15:28.485 "uuid": "c0b78bcf-1f06-4648-9663-fafd96ec2591" 00:15:28.485 } 00:15:28.485 ] 00:15:28.485 }, 00:15:28.485 { 00:15:28.485 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:28.485 "subtype": "NVMe", 00:15:28.485 "listen_addresses": [ 00:15:28.485 { 00:15:28.485 "trtype": "VFIOUSER", 00:15:28.485 "adrfam": "IPv4", 00:15:28.485 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:28.485 "trsvcid": "0" 00:15:28.485 } 00:15:28.485 ], 00:15:28.485 "allow_any_host": true, 00:15:28.485 "hosts": [], 00:15:28.485 "serial_number": "SPDK2", 00:15:28.485 "model_number": "SPDK bdev Controller", 00:15:28.485 "max_namespaces": 32, 00:15:28.485 "min_cntlid": 1, 00:15:28.485 "max_cntlid": 65519, 00:15:28.485 "namespaces": [ 00:15:28.485 { 00:15:28.485 "nsid": 1, 00:15:28.485 "bdev_name": "Malloc2", 00:15:28.485 "name": "Malloc2", 00:15:28.485 "nguid": "C84E5ECEB943408A95DC2D5D9675B4E8", 00:15:28.485 "uuid": "c84e5ece-b943-408a-95dc-2d5d9675b4e8" 00:15:28.485 } 00:15:28.485 ] 00:15:28.485 } 00:15:28.485 ] 00:15:28.485 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:28.485 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3473252 00:15:28.485 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:28.485 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:28.485 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:15:28.485 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:28.485 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:28.485 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:15:28.485 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:28.485 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:28.485 [2024-11-20 07:13:50.751609] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:28.746 Malloc3 00:15:28.746 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:28.746 [2024-11-20 07:13:50.939892] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:28.746 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:28.746 Asynchronous Event Request test 00:15:28.746 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:28.746 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:28.746 Registering asynchronous event callbacks... 00:15:28.746 Starting namespace attribute notice tests for all controllers... 00:15:28.746 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:28.746 aer_cb - Changed Namespace 00:15:28.746 Cleaning up... 00:15:29.006 [ 00:15:29.006 { 00:15:29.006 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:29.006 "subtype": "Discovery", 00:15:29.006 "listen_addresses": [], 00:15:29.006 "allow_any_host": true, 00:15:29.006 "hosts": [] 00:15:29.006 }, 00:15:29.006 { 00:15:29.006 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:29.006 "subtype": "NVMe", 00:15:29.006 "listen_addresses": [ 00:15:29.006 { 00:15:29.006 "trtype": "VFIOUSER", 00:15:29.006 "adrfam": "IPv4", 00:15:29.006 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:29.006 "trsvcid": "0" 00:15:29.006 } 00:15:29.006 ], 00:15:29.006 "allow_any_host": true, 00:15:29.006 "hosts": [], 00:15:29.006 "serial_number": "SPDK1", 00:15:29.006 "model_number": "SPDK bdev Controller", 00:15:29.006 "max_namespaces": 32, 00:15:29.006 "min_cntlid": 1, 00:15:29.006 "max_cntlid": 65519, 00:15:29.006 "namespaces": [ 00:15:29.006 { 00:15:29.006 "nsid": 1, 00:15:29.006 "bdev_name": "Malloc1", 00:15:29.006 "name": "Malloc1", 00:15:29.006 "nguid": "C0B78BCF1F0646489663FAFD96EC2591", 00:15:29.007 "uuid": "c0b78bcf-1f06-4648-9663-fafd96ec2591" 00:15:29.007 }, 00:15:29.007 { 00:15:29.007 "nsid": 2, 00:15:29.007 "bdev_name": "Malloc3", 00:15:29.007 "name": "Malloc3", 00:15:29.007 "nguid": "677735EBB0F14D3A972BB56821244E73", 00:15:29.007 "uuid": "677735eb-b0f1-4d3a-972b-b56821244e73" 00:15:29.007 } 00:15:29.007 ] 00:15:29.007 }, 00:15:29.007 { 00:15:29.007 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:29.007 "subtype": "NVMe", 00:15:29.007 "listen_addresses": [ 00:15:29.007 { 00:15:29.007 "trtype": "VFIOUSER", 00:15:29.007 "adrfam": "IPv4", 00:15:29.007 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:29.007 "trsvcid": "0" 00:15:29.007 } 00:15:29.007 ], 00:15:29.007 "allow_any_host": true, 00:15:29.007 "hosts": [], 00:15:29.007 "serial_number": "SPDK2", 00:15:29.007 "model_number": "SPDK bdev 
Controller", 00:15:29.007 "max_namespaces": 32, 00:15:29.007 "min_cntlid": 1, 00:15:29.007 "max_cntlid": 65519, 00:15:29.007 "namespaces": [ 00:15:29.007 { 00:15:29.007 "nsid": 1, 00:15:29.007 "bdev_name": "Malloc2", 00:15:29.007 "name": "Malloc2", 00:15:29.007 "nguid": "C84E5ECEB943408A95DC2D5D9675B4E8", 00:15:29.007 "uuid": "c84e5ece-b943-408a-95dc-2d5d9675b4e8" 00:15:29.007 } 00:15:29.007 ] 00:15:29.007 } 00:15:29.007 ] 00:15:29.007 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3473252 00:15:29.007 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:29.007 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:29.007 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:29.007 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:29.007 [2024-11-20 07:13:51.166043] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:15:29.007 [2024-11-20 07:13:51.166088] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3473271 ] 00:15:29.007 [2024-11-20 07:13:51.206380] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:29.007 [2024-11-20 07:13:51.211313] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:29.007 [2024-11-20 07:13:51.211332] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8461765000 00:15:29.007 [2024-11-20 07:13:51.212317] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:29.007 [2024-11-20 07:13:51.213325] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:29.007 [2024-11-20 07:13:51.214331] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:29.007 [2024-11-20 07:13:51.215336] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:29.007 [2024-11-20 07:13:51.216345] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:29.007 [2024-11-20 07:13:51.217356] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:29.007 [2024-11-20 07:13:51.218360] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:29.007 [2024-11-20 07:13:51.219368] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:15:29.007 [2024-11-20 07:13:51.220374] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:29.007 [2024-11-20 07:13:51.220382] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f846175a000 00:15:29.007 [2024-11-20 07:13:51.221291] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:29.007 [2024-11-20 07:13:51.235434] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:29.007 [2024-11-20 07:13:51.235451] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:29.007 [2024-11-20 07:13:51.237502] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:29.007 [2024-11-20 07:13:51.237536] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:29.007 [2024-11-20 07:13:51.237592] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:29.007 [2024-11-20 07:13:51.237603] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:29.007 [2024-11-20 07:13:51.237607] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:29.007 [2024-11-20 07:13:51.238505] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:29.007 [2024-11-20 07:13:51.238512] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:29.007 [2024-11-20 07:13:51.238518] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:29.007 [2024-11-20 07:13:51.239513] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:29.007 [2024-11-20 07:13:51.239520] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:29.007 [2024-11-20 07:13:51.239525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:29.007 [2024-11-20 07:13:51.240516] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:29.007 [2024-11-20 07:13:51.240523] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:29.007 [2024-11-20 07:13:51.241523] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:29.007 [2024-11-20 07:13:51.241529] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
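The register reads traced above are the standard NVMe controller properties CAP (offset 0x0) and VS (offset 0x8), and the raw values can be cross-checked against the identify summary printed further down; a minimal bash decode of the values from this log, with field layouts taken from the NVMe base specification rather than from the log itself:

  cap=0x201e0100ff vs=0x10300
  # CAP.MQES is zero-based (0xff -> 256 entries); CAP.TO counts 500 ms units (0x1e -> 15000 ms)
  echo "MQES=$(( (cap & 0xffff) + 1 )) CQR=$(( (cap >> 16) & 1 )) TO=$(( ((cap >> 24) & 0xff) * 500 ))ms"
  echo "VS=$(( vs >> 16 )).$(( (vs >> 8) & 0xff ))"   # -> 1.3

which matches the "Maximum Queue Entries: 256", "Contiguous Queues Required: Yes", "Reset Timeout: 15000 ms", and "NVMe Specification Version (VS): 1.3" lines in the identify output below.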
00:15:29.007 [2024-11-20 07:13:51.241533] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:29.007 [2024-11-20 07:13:51.241538] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:29.007 [2024-11-20 07:13:51.241643] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:29.007 [2024-11-20 07:13:51.241647] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:29.007 [2024-11-20 07:13:51.241650] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:29.007 [2024-11-20 07:13:51.242530] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:29.007 [2024-11-20 07:13:51.243533] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:29.007 [2024-11-20 07:13:51.244538] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:29.007 [2024-11-20 07:13:51.245536] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:29.007 [2024-11-20 07:13:51.245565] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:29.007 [2024-11-20 07:13:51.246545] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:29.007 [2024-11-20 07:13:51.246551] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:29.007 [2024-11-20 07:13:51.246555] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:29.007 [2024-11-20 07:13:51.246569] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:29.007 [2024-11-20 07:13:51.246574] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:29.007 [2024-11-20 07:13:51.246584] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:29.007 [2024-11-20 07:13:51.246587] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:29.007 [2024-11-20 07:13:51.246591] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:29.007 [2024-11-20 07:13:51.246600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:29.007 [2024-11-20 07:13:51.253164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:29.007 
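The value 0x460001 written to offset 0x14 above is the CC register write that enables the controller: bit 0 is EN, and bits 19:16/23:20 hold the log2 submission/completion queue entry sizes. A one-line decode (bit positions per the NVMe base specification):

  cc=0x460001
  echo "EN=$(( cc & 1 )) SQE=$(( 1 << ((cc >> 16) & 0xf) ))B CQE=$(( 1 << ((cc >> 20) & 0xf) ))B"
  # -> EN=1, 64-byte SQEs, 16-byte CQEs, matching the queue entry sizes identify reports

The 0x464001 seen during teardown is the same value with CC.SHN set to 01b, a normal shutdown request, which is why CSTS (offset 0x1c) later reads 0x9: RDY plus SHST = shutdown processing complete.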
[2024-11-20 07:13:51.253172] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:29.007 [2024-11-20 07:13:51.253176] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:29.007 [2024-11-20 07:13:51.253179] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:29.007 [2024-11-20 07:13:51.253182] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:29.008 [2024-11-20 07:13:51.253187] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:29.008 [2024-11-20 07:13:51.253191] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:29.008 [2024-11-20 07:13:51.253194] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:29.008 [2024-11-20 07:13:51.253200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:29.008 [2024-11-20 07:13:51.253208] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:29.008 [2024-11-20 07:13:51.261163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:29.008 [2024-11-20 07:13:51.261172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.008 [2024-11-20 07:13:51.261178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.008 [2024-11-20 07:13:51.261184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.008 [2024-11-20 07:13:51.261190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.008 [2024-11-20 07:13:51.261194] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:29.008 [2024-11-20 07:13:51.261198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:29.008 [2024-11-20 07:13:51.261205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:29.008 [2024-11-20 07:13:51.269162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:29.008 [2024-11-20 07:13:51.269176] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:29.008 [2024-11-20 07:13:51.269179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:15:29.008 [2024-11-20 07:13:51.269184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:29.008 [2024-11-20 07:13:51.269188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:29.008 [2024-11-20 07:13:51.269196] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:29.008 [2024-11-20 07:13:51.277162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:29.008 [2024-11-20 07:13:51.277208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:29.008 [2024-11-20 07:13:51.277214] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:29.008 [2024-11-20 07:13:51.277219] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:29.008 [2024-11-20 07:13:51.277222] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:29.008 [2024-11-20 07:13:51.277225] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:29.008 [2024-11-20 07:13:51.277229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:29.269 [2024-11-20 07:13:51.285163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:29.269 [2024-11-20 07:13:51.285172] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:29.269 [2024-11-20 07:13:51.285181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:29.269 [2024-11-20 07:13:51.285187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:29.269 [2024-11-20 07:13:51.285192] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:29.269 [2024-11-20 07:13:51.285195] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:29.269 [2024-11-20 07:13:51.285197] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:29.269 [2024-11-20 07:13:51.285201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:29.269 [2024-11-20 07:13:51.293164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:29.269 [2024-11-20 07:13:51.293176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:29.269 [2024-11-20 07:13:51.293182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:15:29.269 [2024-11-20 07:13:51.293187] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:29.269 [2024-11-20 07:13:51.293190] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:29.269 [2024-11-20 07:13:51.293192] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:29.269 [2024-11-20 07:13:51.293197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:29.269 [2024-11-20 07:13:51.301162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:29.269 [2024-11-20 07:13:51.301169] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:29.269 [2024-11-20 07:13:51.301174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:29.269 [2024-11-20 07:13:51.301181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:29.269 [2024-11-20 07:13:51.301186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:29.269 [2024-11-20 07:13:51.301191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:29.269 [2024-11-20 07:13:51.301195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:29.269 [2024-11-20 07:13:51.301198] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:29.269 [2024-11-20 07:13:51.301201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:29.269 [2024-11-20 07:13:51.301205] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:29.269 [2024-11-20 07:13:51.301218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:29.269 [2024-11-20 07:13:51.309162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:29.269 [2024-11-20 07:13:51.309172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:29.270 [2024-11-20 07:13:51.317163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:29.270 [2024-11-20 07:13:51.317172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:29.270 [2024-11-20 07:13:51.325161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:15:29.270 [2024-11-20 07:13:51.325171] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:29.270 [2024-11-20 07:13:51.333161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:29.270 [2024-11-20 07:13:51.333173] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:29.270 [2024-11-20 07:13:51.333176] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:29.270 [2024-11-20 07:13:51.333179] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:29.270 [2024-11-20 07:13:51.333181] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:29.270 [2024-11-20 07:13:51.333184] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:29.270 [2024-11-20 07:13:51.333188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:29.270 [2024-11-20 07:13:51.333194] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:29.270 [2024-11-20 07:13:51.333197] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:29.270 [2024-11-20 07:13:51.333199] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:29.270 [2024-11-20 07:13:51.333203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:29.270 [2024-11-20 07:13:51.333209] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:29.270 [2024-11-20 07:13:51.333212] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:29.270 [2024-11-20 07:13:51.333215] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:29.270 [2024-11-20 07:13:51.333220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:29.270 [2024-11-20 07:13:51.333225] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:29.270 [2024-11-20 07:13:51.333228] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:29.270 [2024-11-20 07:13:51.333231] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:29.270 [2024-11-20 07:13:51.333235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:29.270 [2024-11-20 07:13:51.341163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:29.270 [2024-11-20 07:13:51.341173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:29.270 [2024-11-20 07:13:51.341180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:29.270 
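The cdw0:7e007e completions for the NUMBER OF QUEUES feature above carry the allocated queue counts as two zero-based 16-bit fields (NSQA in the low word, NCQA in the high word), which is where the 127 I/O submission and completion queues in the identify summary come from:

  cdw0=0x7e007e
  echo "NSQA=$(( (cdw0 & 0xffff) + 1 )) NCQA=$(( (cdw0 >> 16) + 1 ))"   # -> 127 127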
[2024-11-20 07:13:51.341185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:29.270 ===================================================== 00:15:29.270 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:29.270 ===================================================== 00:15:29.270 Controller Capabilities/Features 00:15:29.270 ================================ 00:15:29.270 Vendor ID: 4e58 00:15:29.270 Subsystem Vendor ID: 4e58 00:15:29.270 Serial Number: SPDK2 00:15:29.270 Model Number: SPDK bdev Controller 00:15:29.270 Firmware Version: 25.01 00:15:29.270 Recommended Arb Burst: 6 00:15:29.270 IEEE OUI Identifier: 8d 6b 50 00:15:29.270 Multi-path I/O 00:15:29.270 May have multiple subsystem ports: Yes 00:15:29.270 May have multiple controllers: Yes 00:15:29.270 Associated with SR-IOV VF: No 00:15:29.270 Max Data Transfer Size: 131072 00:15:29.270 Max Number of Namespaces: 32 00:15:29.270 Max Number of I/O Queues: 127 00:15:29.270 NVMe Specification Version (VS): 1.3 00:15:29.270 NVMe Specification Version (Identify): 1.3 00:15:29.270 Maximum Queue Entries: 256 00:15:29.270 Contiguous Queues Required: Yes 00:15:29.270 Arbitration Mechanisms Supported 00:15:29.270 Weighted Round Robin: Not Supported 00:15:29.270 Vendor Specific: Not Supported 00:15:29.270 Reset Timeout: 15000 ms 00:15:29.270 Doorbell Stride: 4 bytes 00:15:29.270 NVM Subsystem Reset: Not Supported 00:15:29.270 Command Sets Supported 00:15:29.270 NVM Command Set: Supported 00:15:29.270 Boot Partition: Not Supported 00:15:29.270 Memory Page Size Minimum: 4096 bytes 00:15:29.270 Memory Page Size Maximum: 4096 bytes 00:15:29.270 Persistent Memory Region: Not Supported 00:15:29.270 Optional Asynchronous Events Supported 00:15:29.270 Namespace Attribute Notices: Supported 00:15:29.270 Firmware Activation Notices: Not Supported 00:15:29.270 ANA Change Notices: Not Supported 00:15:29.270 PLE Aggregate Log Change Notices: Not Supported 00:15:29.270 LBA Status Info Alert Notices: Not Supported 00:15:29.270 EGE Aggregate Log Change Notices: Not Supported 00:15:29.270 Normal NVM Subsystem Shutdown event: Not Supported 00:15:29.270 Zone Descriptor Change Notices: Not Supported 00:15:29.270 Discovery Log Change Notices: Not Supported 00:15:29.270 Controller Attributes 00:15:29.270 128-bit Host Identifier: Supported 00:15:29.270 Non-Operational Permissive Mode: Not Supported 00:15:29.270 NVM Sets: Not Supported 00:15:29.270 Read Recovery Levels: Not Supported 00:15:29.270 Endurance Groups: Not Supported 00:15:29.270 Predictable Latency Mode: Not Supported 00:15:29.270 Traffic Based Keep ALive: Not Supported 00:15:29.270 Namespace Granularity: Not Supported 00:15:29.270 SQ Associations: Not Supported 00:15:29.270 UUID List: Not Supported 00:15:29.270 Multi-Domain Subsystem: Not Supported 00:15:29.270 Fixed Capacity Management: Not Supported 00:15:29.270 Variable Capacity Management: Not Supported 00:15:29.270 Delete Endurance Group: Not Supported 00:15:29.270 Delete NVM Set: Not Supported 00:15:29.270 Extended LBA Formats Supported: Not Supported 00:15:29.270 Flexible Data Placement Supported: Not Supported 00:15:29.270 00:15:29.270 Controller Memory Buffer Support 00:15:29.270 ================================ 00:15:29.270 Supported: No 00:15:29.270 00:15:29.270 Persistent Memory Region Support 00:15:29.270 ================================ 00:15:29.270 Supported: No 00:15:29.270 00:15:29.270 Admin Command Set Attributes 
00:15:29.270 ============================ 00:15:29.270 Security Send/Receive: Not Supported 00:15:29.270 Format NVM: Not Supported 00:15:29.270 Firmware Activate/Download: Not Supported 00:15:29.270 Namespace Management: Not Supported 00:15:29.270 Device Self-Test: Not Supported 00:15:29.270 Directives: Not Supported 00:15:29.270 NVMe-MI: Not Supported 00:15:29.270 Virtualization Management: Not Supported 00:15:29.270 Doorbell Buffer Config: Not Supported 00:15:29.270 Get LBA Status Capability: Not Supported 00:15:29.270 Command & Feature Lockdown Capability: Not Supported 00:15:29.270 Abort Command Limit: 4 00:15:29.270 Async Event Request Limit: 4 00:15:29.270 Number of Firmware Slots: N/A 00:15:29.270 Firmware Slot 1 Read-Only: N/A 00:15:29.270 Firmware Activation Without Reset: N/A 00:15:29.270 Multiple Update Detection Support: N/A 00:15:29.270 Firmware Update Granularity: No Information Provided 00:15:29.270 Per-Namespace SMART Log: No 00:15:29.270 Asymmetric Namespace Access Log Page: Not Supported 00:15:29.270 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:29.270 Command Effects Log Page: Supported 00:15:29.270 Get Log Page Extended Data: Supported 00:15:29.270 Telemetry Log Pages: Not Supported 00:15:29.270 Persistent Event Log Pages: Not Supported 00:15:29.270 Supported Log Pages Log Page: May Support 00:15:29.270 Commands Supported & Effects Log Page: Not Supported 00:15:29.270 Feature Identifiers & Effects Log Page:May Support 00:15:29.270 NVMe-MI Commands & Effects Log Page: May Support 00:15:29.270 Data Area 4 for Telemetry Log: Not Supported 00:15:29.270 Error Log Page Entries Supported: 128 00:15:29.270 Keep Alive: Supported 00:15:29.270 Keep Alive Granularity: 10000 ms 00:15:29.270 00:15:29.270 NVM Command Set Attributes 00:15:29.270 ========================== 00:15:29.270 Submission Queue Entry Size 00:15:29.270 Max: 64 00:15:29.270 Min: 64 00:15:29.270 Completion Queue Entry Size 00:15:29.270 Max: 16 00:15:29.270 Min: 16 00:15:29.270 Number of Namespaces: 32 00:15:29.270 Compare Command: Supported 00:15:29.270 Write Uncorrectable Command: Not Supported 00:15:29.270 Dataset Management Command: Supported 00:15:29.270 Write Zeroes Command: Supported 00:15:29.270 Set Features Save Field: Not Supported 00:15:29.270 Reservations: Not Supported 00:15:29.270 Timestamp: Not Supported 00:15:29.270 Copy: Supported 00:15:29.270 Volatile Write Cache: Present 00:15:29.270 Atomic Write Unit (Normal): 1 00:15:29.270 Atomic Write Unit (PFail): 1 00:15:29.270 Atomic Compare & Write Unit: 1 00:15:29.270 Fused Compare & Write: Supported 00:15:29.270 Scatter-Gather List 00:15:29.270 SGL Command Set: Supported (Dword aligned) 00:15:29.270 SGL Keyed: Not Supported 00:15:29.270 SGL Bit Bucket Descriptor: Not Supported 00:15:29.271 SGL Metadata Pointer: Not Supported 00:15:29.271 Oversized SGL: Not Supported 00:15:29.271 SGL Metadata Address: Not Supported 00:15:29.271 SGL Offset: Not Supported 00:15:29.271 Transport SGL Data Block: Not Supported 00:15:29.271 Replay Protected Memory Block: Not Supported 00:15:29.271 00:15:29.271 Firmware Slot Information 00:15:29.271 ========================= 00:15:29.271 Active slot: 1 00:15:29.271 Slot 1 Firmware Revision: 25.01 00:15:29.271 00:15:29.271 00:15:29.271 Commands Supported and Effects 00:15:29.271 ============================== 00:15:29.271 Admin Commands 00:15:29.271 -------------- 00:15:29.271 Get Log Page (02h): Supported 00:15:29.271 Identify (06h): Supported 00:15:29.271 Abort (08h): Supported 00:15:29.271 Set Features (09h): Supported 
00:15:29.271 Get Features (0Ah): Supported 00:15:29.271 Asynchronous Event Request (0Ch): Supported 00:15:29.271 Keep Alive (18h): Supported 00:15:29.271 I/O Commands 00:15:29.271 ------------ 00:15:29.271 Flush (00h): Supported LBA-Change 00:15:29.271 Write (01h): Supported LBA-Change 00:15:29.271 Read (02h): Supported 00:15:29.271 Compare (05h): Supported 00:15:29.271 Write Zeroes (08h): Supported LBA-Change 00:15:29.271 Dataset Management (09h): Supported LBA-Change 00:15:29.271 Copy (19h): Supported LBA-Change 00:15:29.271 00:15:29.271 Error Log 00:15:29.271 ========= 00:15:29.271 00:15:29.271 Arbitration 00:15:29.271 =========== 00:15:29.271 Arbitration Burst: 1 00:15:29.271 00:15:29.271 Power Management 00:15:29.271 ================ 00:15:29.271 Number of Power States: 1 00:15:29.271 Current Power State: Power State #0 00:15:29.271 Power State #0: 00:15:29.271 Max Power: 0.00 W 00:15:29.271 Non-Operational State: Operational 00:15:29.271 Entry Latency: Not Reported 00:15:29.271 Exit Latency: Not Reported 00:15:29.271 Relative Read Throughput: 0 00:15:29.271 Relative Read Latency: 0 00:15:29.271 Relative Write Throughput: 0 00:15:29.271 Relative Write Latency: 0 00:15:29.271 Idle Power: Not Reported 00:15:29.271 Active Power: Not Reported 00:15:29.271 Non-Operational Permissive Mode: Not Supported 00:15:29.271 00:15:29.271 Health Information 00:15:29.271 ================== 00:15:29.271 Critical Warnings: 00:15:29.271 Available Spare Space: OK 00:15:29.271 Temperature: OK 00:15:29.271 Device Reliability: OK 00:15:29.271 Read Only: No 00:15:29.271 Volatile Memory Backup: OK 00:15:29.271 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:29.271 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:29.271 Available Spare: 0% 00:15:29.271 Available Spare Threshold: 0% 00:15:29.271 Life Percentage Used: 0% 00:15:29.271 Data Units Read: 0 00:15:29.271 Data Units Written: 0 00:15:29.271 Host Read Commands: 0 00:15:29.271 Host Write Commands: 0 00:15:29.271 Controller Busy Time: 0 minutes 00:15:29.271 Power Cycles: 0 00:15:29.271 Power On Hours: 0 hours 00:15:29.271 Unsafe Shutdowns: 0 00:15:29.271 Unrecoverable Media Errors: 0 00:15:29.271 Lifetime Error Log Entries: 0 00:15:29.271 Warning Temperature Time: 0 minutes 00:15:29.271 Critical Temperature Time: 0 minutes 00:15:29.271 00:15:29.271 Number of Queues 00:15:29.271 ================ 00:15:29.271 Number of I/O Submission Queues: 127 00:15:29.271 Number of I/O Completion Queues: 127 00:15:29.271 00:15:29.271 Active Namespaces 00:15:29.271 ================= 00:15:29.271 Namespace ID:1 00:15:29.271 Error Recovery Timeout: Unlimited 00:15:29.271 Command Set Identifier: NVM (00h) 00:15:29.271 Deallocate: Supported 00:15:29.271 Deallocated/Unwritten Error: Not Supported 00:15:29.271 Deallocated Read Value: Unknown 00:15:29.271 Deallocate in Write Zeroes: Not Supported 00:15:29.271 Deallocated Guard Field: 0xFFFF 00:15:29.271 Flush: Supported 00:15:29.271 Reservation: Supported 00:15:29.271 Namespace Sharing Capabilities: Multiple Controllers 00:15:29.271 Size (in LBAs): 131072 (0GiB) 00:15:29.271 Capacity (in LBAs): 131072 (0GiB) 00:15:29.271 Utilization (in LBAs): 131072 (0GiB) 00:15:29.271 NGUID: C84E5ECEB943408A95DC2D5D9675B4E8 00:15:29.271 UUID: c84e5ece-b943-408a-95dc-2d5d9675b4e8 00:15:29.271 Thin Provisioning: Not Supported 00:15:29.271 Per-NS Atomic Units: Yes 00:15:29.271 Atomic Boundary Size (Normal): 0 00:15:29.271 Atomic Boundary Size (PFail): 0 00:15:29.271 Atomic Boundary Offset: 0 00:15:29.271 Maximum Single Source Range Length: 65535 00:15:29.271 Maximum Copy Length: 65535 00:15:29.271 Maximum Source Range Count: 1 00:15:29.271 NGUID/EUI64 Never Reused: No 00:15:29.271 Namespace Write Protected: No 00:15:29.271 Number of LBA Formats: 1 00:15:29.271 Current LBA Format: LBA Format #00 00:15:29.271 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:29.271 00:15:29.271
[2024-11-20 07:13:51.341260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:29.271 [2024-11-20 07:13:51.349162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:29.271 [2024-11-20 07:13:51.349183] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:29.271 [2024-11-20 07:13:51.349190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.271 [2024-11-20 07:13:51.349195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.271 [2024-11-20 07:13:51.349200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.271 [2024-11-20 07:13:51.349204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.271 [2024-11-20 07:13:51.349243] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:29.271 [2024-11-20 07:13:51.349250] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:29.271 [2024-11-20 07:13:51.350241] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:29.271 [2024-11-20 07:13:51.350276] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:29.271 [2024-11-20 07:13:51.350281] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:29.271 [2024-11-20 07:13:51.351250] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:29.271 [2024-11-20 07:13:51.351258] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:29.271 [2024-11-20 07:13:51.351301] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:29.271 [2024-11-20 07:13:51.354163] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:29.271
00:15:29.271 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:34.553 [2024-11-20 07:13:51.541214] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:34.553 Initializing NVMe Controllers 00:15:34.553
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:34.553 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:34.553 Initialization complete. Launching workers. 00:15:34.553 ======================================================== 00:15:34.553 Latency(us) 00:15:34.553 Device Information : IOPS MiB/s Average min max 00:15:34.553 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40061.17 156.49 3194.98 844.48 8834.42 00:15:34.553 ======================================================== 00:15:34.553 Total : 40061.17 156.49 3194.98 844.48 8834.42 00:15:34.553 00:15:34.553 [2024-11-20 07:13:56.652351] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:34.553 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:34.813 [2024-11-20 07:13:56.842966] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:40.093 Initializing NVMe Controllers 00:15:40.093 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:40.093 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:40.093 Initialization complete. Launching workers. 00:15:40.093 ======================================================== 00:15:40.093 Latency(us) 00:15:40.093 Device Information : IOPS MiB/s Average min max 00:15:40.093 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39950.47 156.06 3203.64 851.50 6816.24 00:15:40.093 ======================================================== 00:15:40.093 Total : 39950.47 156.06 3203.64 851.50 6816.24 00:15:40.093 00:15:40.093 [2024-11-20 07:14:01.860626] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:40.093 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:40.093 [2024-11-20 07:14:02.058772] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:45.375 [2024-11-20 07:14:07.188248] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:45.375 Initializing NVMe Controllers 00:15:45.375 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:45.375 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:45.375 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:45.375 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:45.375 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:45.375 Initialization complete. Launching workers. 
00:15:45.375 Starting thread on core 2 00:15:45.375 Starting thread on core 3 00:15:45.375 Starting thread on core 1 00:15:45.375 07:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:45.375 [2024-11-20 07:14:07.433463] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:49.578 [2024-11-20 07:14:11.143294] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:49.578 Initializing NVMe Controllers 00:15:49.578 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:49.578 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:49.578 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:49.578 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:49.578 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:49.578 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:49.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:49.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:49.578 Initialization complete. Launching workers. 00:15:49.578 Starting thread on core 1 with urgent priority queue 00:15:49.578 Starting thread on core 2 with urgent priority queue 00:15:49.578 Starting thread on core 3 with urgent priority queue 00:15:49.578 Starting thread on core 0 with urgent priority queue 00:15:49.578 SPDK bdev Controller (SPDK2 ) core 0: 6973.00 IO/s 14.34 secs/100000 ios 00:15:49.578 SPDK bdev Controller (SPDK2 ) core 1: 5657.00 IO/s 17.68 secs/100000 ios 00:15:49.578 SPDK bdev Controller (SPDK2 ) core 2: 5172.67 IO/s 19.33 secs/100000 ios 00:15:49.578 SPDK bdev Controller (SPDK2 ) core 3: 5788.67 IO/s 17.28 secs/100000 ios 00:15:49.578 ======================================================== 00:15:49.578 00:15:49.578 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:49.578 [2024-11-20 07:14:11.381234] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:49.578 Initializing NVMe Controllers 00:15:49.578 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:49.578 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:49.578 Namespace ID: 1 size: 0GB 00:15:49.578 Initialization complete. 00:15:49.578 INFO: using host memory buffer for IO 00:15:49.578 Hello world! 
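The perf, reconnect, arbitration, and hello_world runs above all drive the same vfio-user controller through a single transport ID string. A minimal sketch of that invocation pattern, with every flag and path copied from this run (the workspace path and vfio-user socket layout are specific to this job and will differ elsewhere):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  # 4 KiB reads (-o 4096 -w read), queue depth 128, 5 seconds, core mask 0x2
  $SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
  # the example binaries accept the same -r transport ID, e.g. the arbitration run above:
  $SPDK/build/examples/arbitration -t 3 -r "$TRID" -d 256 -g

Swapping -w read for -w write reproduces the second perf run; only the workload differs between the two latency tables.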
00:15:49.578 [2024-11-20 07:14:11.393313] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:49.578 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:49.578 [2024-11-20 07:14:11.637541] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:50.517 Initializing NVMe Controllers 00:15:50.517 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:50.517 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:50.517 Initialization complete. Launching workers. 00:15:50.517 submit (in ns) avg, min, max = 6264.1, 2821.7, 3998683.3 00:15:50.517 complete (in ns) avg, min, max = 14467.5, 1636.7, 3998234.2 00:15:50.517 00:15:50.517 Submit histogram 00:15:50.518 ================ 00:15:50.518 Range in us Cumulative Count 00:15:50.518 2.813 - 2.827: 0.0821% ( 17) 00:15:50.518 2.827 - 2.840: 0.5942% ( 106) 00:15:50.518 2.840 - 2.853: 1.9758% ( 286) 00:15:50.518 2.853 - 2.867: 4.6763% ( 559) 00:15:50.518 2.867 - 2.880: 9.8116% ( 1063) 00:15:50.518 2.880 - 2.893: 15.1498% ( 1105) 00:15:50.518 2.893 - 2.907: 20.2029% ( 1046) 00:15:50.518 2.907 - 2.920: 24.7101% ( 933) 00:15:50.518 2.920 - 2.933: 29.3333% ( 957) 00:15:50.518 2.933 - 2.947: 34.5459% ( 1079) 00:15:50.518 2.947 - 2.960: 40.0097% ( 1131) 00:15:50.518 2.960 - 2.973: 45.5507% ( 1147) 00:15:50.518 2.973 - 2.987: 51.2271% ( 1175) 00:15:50.518 2.987 - 3.000: 59.3478% ( 1681) 00:15:50.518 3.000 - 3.013: 68.4976% ( 1894) 00:15:50.518 3.013 - 3.027: 77.7440% ( 1914) 00:15:50.518 3.027 - 3.040: 84.8792% ( 1477) 00:15:50.518 3.040 - 3.053: 90.6377% ( 1192) 00:15:50.518 3.053 - 3.067: 94.7488% ( 851) 00:15:50.518 3.067 - 3.080: 97.2174% ( 511) 00:15:50.518 3.080 - 3.093: 98.3623% ( 237) 00:15:50.518 3.093 - 3.107: 98.8647% ( 104) 00:15:50.518 3.107 - 3.120: 99.2271% ( 75) 00:15:50.518 3.120 - 3.133: 99.3865% ( 33) 00:15:50.518 3.133 - 3.147: 99.4589% ( 15) 00:15:50.518 3.147 - 3.160: 99.5121% ( 11) 00:15:50.518 3.160 - 3.173: 99.5362% ( 5) 00:15:50.518 3.173 - 3.187: 99.5459% ( 2) 00:15:50.518 3.187 - 3.200: 99.5556% ( 2) 00:15:50.518 3.200 - 3.213: 99.5652% ( 2) 00:15:50.518 3.240 - 3.253: 99.5700% ( 1) 00:15:50.518 3.253 - 3.267: 99.5749% ( 1) 00:15:50.518 3.333 - 3.347: 99.5797% ( 1) 00:15:50.518 3.387 - 3.400: 99.5845% ( 1) 00:15:50.518 3.627 - 3.653: 99.5990% ( 3) 00:15:50.518 4.293 - 4.320: 99.6039% ( 1) 00:15:50.518 4.347 - 4.373: 99.6087% ( 1) 00:15:50.518 4.373 - 4.400: 99.6135% ( 1) 00:15:50.518 4.480 - 4.507: 99.6184% ( 1) 00:15:50.518 4.667 - 4.693: 99.6232% ( 1) 00:15:50.518 4.773 - 4.800: 99.6329% ( 2) 00:15:50.518 4.960 - 4.987: 99.6377% ( 1) 00:15:50.518 4.987 - 5.013: 99.6473% ( 2) 00:15:50.518 5.013 - 5.040: 99.6667% ( 4) 00:15:50.518 5.067 - 5.093: 99.6763% ( 2) 00:15:50.518 5.120 - 5.147: 99.6812% ( 1) 00:15:50.518 5.467 - 5.493: 99.6860% ( 1) 00:15:50.518 5.600 - 5.627: 99.6908% ( 1) 00:15:50.518 5.627 - 5.653: 99.6957% ( 1) 00:15:50.518 5.680 - 5.707: 99.7005% ( 1) 00:15:50.518 5.733 - 5.760: 99.7053% ( 1) 00:15:50.518 5.760 - 5.787: 99.7150% ( 2) 00:15:50.518 5.840 - 5.867: 99.7198% ( 1) 00:15:50.518 5.947 - 5.973: 99.7295% ( 2) 00:15:50.518 5.973 - 6.000: 99.7440% ( 3) 00:15:50.518 6.000 - 6.027: 99.7488% ( 1) 00:15:50.518 6.027 - 6.053: 99.7536% ( 1) 00:15:50.518 6.053 - 
6.080: 99.7585% ( 1) 00:15:50.518 6.107 - 6.133: 99.7633% ( 1) 00:15:50.518 6.160 - 6.187: 99.7681% ( 1) 00:15:50.518 6.187 - 6.213: 99.7778% ( 2) 00:15:50.518 6.213 - 6.240: 99.7826% ( 1) 00:15:50.518 6.347 - 6.373: 99.7874% ( 1) 00:15:50.518 6.427 - 6.453: 99.8116% ( 5) 00:15:50.518 6.453 - 6.480: 99.8164% ( 1) 00:15:50.518 6.480 - 6.507: 99.8213% ( 1) 00:15:50.518 6.507 - 6.533: 99.8309% ( 2) 00:15:50.518 6.533 - 6.560: 99.8357% ( 1) 00:15:50.518 6.587 - 6.613: 99.8454% ( 2) 00:15:50.518 6.720 - 6.747: 99.8502% ( 1) 00:15:50.518 6.773 - 6.800: 99.8551% ( 1) 00:15:50.518 6.827 - 6.880: 99.8599% ( 1) 00:15:50.518 6.880 - 6.933: 99.8744% ( 3) 00:15:50.518 6.987 - 7.040: 99.8792% ( 1) 00:15:50.518 7.147 - 7.200: 99.8841% ( 1) 00:15:50.518 7.253 - 7.307: 99.8889% ( 1) 00:15:50.518 7.413 - 7.467: 99.8937% ( 1) 00:15:50.518 7.787 - 7.840: 99.8986% ( 1) 00:15:50.518 7.893 - 7.947: 99.9034% ( 1) 00:15:50.518 [2024-11-20 07:14:12.729687] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:50.518 11.253 - 11.307: 99.9082% ( 1) 00:15:50.518 11.840 - 11.893: 99.9130% ( 1) 00:15:50.518 12.107 - 12.160: 99.9179% ( 1) 00:15:50.518 3986.773 - 4014.080: 100.0000% ( 17) 00:15:50.518 00:15:50.518 Complete histogram 00:15:50.518 ================== 00:15:50.518 Range in us Cumulative Count 00:15:50.518 1.633 - 1.640: 0.0048% ( 1) 00:15:50.518 1.640 - 1.647: 0.5362% ( 110) 00:15:50.518 1.647 - 1.653: 0.8019% ( 55) 00:15:50.518 1.653 - 1.660: 0.9034% ( 21) 00:15:50.518 1.660 - 1.667: 1.0338% ( 27) 00:15:50.518 1.667 - 1.673: 1.1594% ( 26) 00:15:50.518 1.673 - 1.680: 1.1981% ( 8) 00:15:50.518 1.680 - 1.687: 1.2319% ( 7) 00:15:50.518 1.687 - 1.693: 1.2367% ( 1) 00:15:50.518 1.693 - 1.700: 1.4155% ( 37) 00:15:50.518 1.700 - 1.707: 40.4058% ( 8071) 00:15:50.518 1.707 - 1.720: 68.4058% ( 5796) 00:15:50.518 1.720 - 1.733: 85.5556% ( 3550) 00:15:50.518 1.733 - 1.747: 92.6618% ( 1471) 00:15:50.518 1.747 - 1.760: 94.3333% ( 346) 00:15:50.518 1.760 - 1.773: 95.7633% ( 296) 00:15:50.518 1.773 - 1.787: 97.4734% ( 354) 00:15:50.518 1.787 - 1.800: 98.6812% ( 250) 00:15:50.518 1.800 - 1.813: 99.2271% ( 113) 00:15:50.518 1.813 - 1.827: 99.4155% ( 39) 00:15:50.518 1.827 - 1.840: 99.4493% ( 7) 00:15:50.518 1.867 - 1.880: 99.4589% ( 2) 00:15:50.518 1.893 - 1.907: 99.4686% ( 2) 00:15:50.518 1.907 - 1.920: 99.4734% ( 1) 00:15:50.518 1.920 - 1.933: 99.4783% ( 1) 00:15:50.518 1.933 - 1.947: 99.4831% ( 1) 00:15:50.518 4.320 - 4.347: 99.4879% ( 1) 00:15:50.518 4.400 - 4.427: 99.4928% ( 1) 00:15:50.518 4.427 - 4.453: 99.4976% ( 1) 00:15:50.518 4.453 - 4.480: 99.5024% ( 1) 00:15:50.518 4.480 - 4.507: 99.5121% ( 2) 00:15:50.518 4.507 - 4.533: 99.5169% ( 1) 00:15:50.518 4.587 - 4.613: 99.5217% ( 1) 00:15:50.518 4.667 - 4.693: 99.5266% ( 1) 00:15:50.518 4.693 - 4.720: 99.5362% ( 2) 00:15:50.518 4.720 - 4.747: 99.5411% ( 1) 00:15:50.518 4.773 - 4.800: 99.5459% ( 1) 00:15:50.518 4.800 - 4.827: 99.5604% ( 3) 00:15:50.518 4.853 - 4.880: 99.5652% ( 1) 00:15:50.518 4.933 - 4.960: 99.5700% ( 1) 00:15:50.518 4.960 - 4.987: 99.5749% ( 1) 00:15:50.518 5.093 - 5.120: 99.5845% ( 2) 00:15:50.518 5.120 - 5.147: 99.5894% ( 1) 00:15:50.518 5.147 - 5.173: 99.5942% ( 1) 00:15:50.518 5.200 - 5.227: 99.5990% ( 1) 00:15:50.518 5.520 - 5.547: 99.6087% ( 2) 00:15:50.518 5.573 - 5.600: 99.6135% ( 1) 00:15:50.518 6.187 - 6.213: 99.6184% ( 1) 00:15:50.518 6.320 - 6.347: 99.6232% ( 1) 00:15:50.518 6.347 - 6.373: 99.6329% ( 2) 00:15:50.518 6.400 - 6.427: 99.6377% ( 1) 00:15:50.518 6.560 - 6.587: 99.6425% ( 1) 
00:15:50.518 6.987 - 7.040: 99.6473% ( 1) 00:15:50.518 7.147 - 7.200: 99.6522% ( 1) 00:15:50.518 7.307 - 7.360: 99.6570% ( 1) 00:15:50.518 7.627 - 7.680: 99.6618% ( 1) 00:15:50.518 8.747 - 8.800: 99.6667% ( 1) 00:15:50.518 15.893 - 16.000: 99.6715% ( 1) 00:15:50.518 31.573 - 31.787: 99.6763% ( 1) 00:15:50.518 112.640 - 113.493: 99.6812% ( 1) 00:15:50.518 3986.773 - 4014.080: 100.0000% ( 66) 00:15:50.518 00:15:50.518 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:50.518 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:50.518 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:50.518 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:50.518 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:50.779 [ 00:15:50.779 { 00:15:50.779 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:50.779 "subtype": "Discovery", 00:15:50.779 "listen_addresses": [], 00:15:50.779 "allow_any_host": true, 00:15:50.779 "hosts": [] 00:15:50.779 }, 00:15:50.779 { 00:15:50.779 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:50.779 "subtype": "NVMe", 00:15:50.779 "listen_addresses": [ 00:15:50.779 { 00:15:50.779 "trtype": "VFIOUSER", 00:15:50.779 "adrfam": "IPv4", 00:15:50.779 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:50.779 "trsvcid": "0" 00:15:50.779 } 00:15:50.779 ], 00:15:50.779 "allow_any_host": true, 00:15:50.779 "hosts": [], 00:15:50.779 "serial_number": "SPDK1", 00:15:50.779 "model_number": "SPDK bdev Controller", 00:15:50.779 "max_namespaces": 32, 00:15:50.779 "min_cntlid": 1, 00:15:50.779 "max_cntlid": 65519, 00:15:50.779 "namespaces": [ 00:15:50.779 { 00:15:50.779 "nsid": 1, 00:15:50.779 "bdev_name": "Malloc1", 00:15:50.779 "name": "Malloc1", 00:15:50.779 "nguid": "C0B78BCF1F0646489663FAFD96EC2591", 00:15:50.779 "uuid": "c0b78bcf-1f06-4648-9663-fafd96ec2591" 00:15:50.779 }, 00:15:50.779 { 00:15:50.779 "nsid": 2, 00:15:50.779 "bdev_name": "Malloc3", 00:15:50.779 "name": "Malloc3", 00:15:50.779 "nguid": "677735EBB0F14D3A972BB56821244E73", 00:15:50.779 "uuid": "677735eb-b0f1-4d3a-972b-b56821244e73" 00:15:50.779 } 00:15:50.779 ] 00:15:50.779 }, 00:15:50.779 { 00:15:50.779 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:50.779 "subtype": "NVMe", 00:15:50.779 "listen_addresses": [ 00:15:50.779 { 00:15:50.779 "trtype": "VFIOUSER", 00:15:50.779 "adrfam": "IPv4", 00:15:50.779 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:50.779 "trsvcid": "0" 00:15:50.779 } 00:15:50.779 ], 00:15:50.779 "allow_any_host": true, 00:15:50.779 "hosts": [], 00:15:50.779 "serial_number": "SPDK2", 00:15:50.779 "model_number": "SPDK bdev Controller", 00:15:50.779 "max_namespaces": 32, 00:15:50.779 "min_cntlid": 1, 00:15:50.779 "max_cntlid": 65519, 00:15:50.779 "namespaces": [ 00:15:50.779 { 00:15:50.779 "nsid": 1, 00:15:50.779 "bdev_name": "Malloc2", 00:15:50.779 "name": "Malloc2", 00:15:50.779 "nguid": "C84E5ECEB943408A95DC2D5D9675B4E8", 00:15:50.779 "uuid": "c84e5ece-b943-408a-95dc-2d5d9675b4e8" 00:15:50.779 } 00:15:50.779 ] 00:15:50.779 } 00:15:50.779 ] 00:15:50.779 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:50.779 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3477614 00:15:50.779 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:50.779 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:15:50.779 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:50.779 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:50.779 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:50.779 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:15:50.779 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:50.779 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:51.039 [2024-11-20 07:14:13.111830] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:51.039 Malloc4 00:15:51.039 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:51.039 [2024-11-20 07:14:13.306129] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:51.299 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:51.299 Asynchronous Event Request test 00:15:51.300 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:51.300 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:51.300 Registering asynchronous event callbacks... 00:15:51.300 Starting namespace attribute notice tests for all controllers... 00:15:51.300 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:51.300 aer_cb - Changed Namespace 00:15:51.300 Cleaning up... 
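In the AER test just shown, aer is started against cnode2 and signals readiness through /tmp/aer_touch_file; the script then hot-adds a second namespace, which raises the namespace-attribute notice that aer_cb reports before cleaning up. The RPC sequence, condensed from the trace (socket path and NQN are specific to this run):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_malloc_create 64 512 --name Malloc4                        # 64 MiB bdev, 512 B blocks
  $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2   # attach as nsid 2, triggers the AEN
  $RPC nvmf_get_subsystems                                             # Malloc4 now listed under cnode2

The JSON that follows is the output of that final call, with Malloc4 present as nsid 2.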
00:15:51.300 [ 00:15:51.300 { 00:15:51.300 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:51.300 "subtype": "Discovery", 00:15:51.300 "listen_addresses": [], 00:15:51.300 "allow_any_host": true, 00:15:51.300 "hosts": [] 00:15:51.300 }, 00:15:51.300 { 00:15:51.300 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:51.300 "subtype": "NVMe", 00:15:51.300 "listen_addresses": [ 00:15:51.300 { 00:15:51.300 "trtype": "VFIOUSER", 00:15:51.300 "adrfam": "IPv4", 00:15:51.300 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:51.300 "trsvcid": "0" 00:15:51.300 } 00:15:51.300 ], 00:15:51.300 "allow_any_host": true, 00:15:51.300 "hosts": [], 00:15:51.300 "serial_number": "SPDK1", 00:15:51.300 "model_number": "SPDK bdev Controller", 00:15:51.300 "max_namespaces": 32, 00:15:51.300 "min_cntlid": 1, 00:15:51.300 "max_cntlid": 65519, 00:15:51.300 "namespaces": [ 00:15:51.300 { 00:15:51.300 "nsid": 1, 00:15:51.300 "bdev_name": "Malloc1", 00:15:51.300 "name": "Malloc1", 00:15:51.300 "nguid": "C0B78BCF1F0646489663FAFD96EC2591", 00:15:51.300 "uuid": "c0b78bcf-1f06-4648-9663-fafd96ec2591" 00:15:51.300 }, 00:15:51.300 { 00:15:51.300 "nsid": 2, 00:15:51.300 "bdev_name": "Malloc3", 00:15:51.300 "name": "Malloc3", 00:15:51.300 "nguid": "677735EBB0F14D3A972BB56821244E73", 00:15:51.300 "uuid": "677735eb-b0f1-4d3a-972b-b56821244e73" 00:15:51.300 } 00:15:51.300 ] 00:15:51.300 }, 00:15:51.300 { 00:15:51.300 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:51.300 "subtype": "NVMe", 00:15:51.300 "listen_addresses": [ 00:15:51.300 { 00:15:51.300 "trtype": "VFIOUSER", 00:15:51.300 "adrfam": "IPv4", 00:15:51.300 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:51.300 "trsvcid": "0" 00:15:51.300 } 00:15:51.300 ], 00:15:51.300 "allow_any_host": true, 00:15:51.300 "hosts": [], 00:15:51.300 "serial_number": "SPDK2", 00:15:51.300 "model_number": "SPDK bdev Controller", 00:15:51.300 "max_namespaces": 32, 00:15:51.300 "min_cntlid": 1, 00:15:51.300 "max_cntlid": 65519, 00:15:51.300 "namespaces": [ 00:15:51.300 { 00:15:51.300 "nsid": 1, 00:15:51.300 "bdev_name": "Malloc2", 00:15:51.300 "name": "Malloc2", 00:15:51.300 "nguid": "C84E5ECEB943408A95DC2D5D9675B4E8", 00:15:51.300 "uuid": "c84e5ece-b943-408a-95dc-2d5d9675b4e8" 00:15:51.300 }, 00:15:51.300 { 00:15:51.300 "nsid": 2, 00:15:51.300 "bdev_name": "Malloc4", 00:15:51.300 "name": "Malloc4", 00:15:51.300 "nguid": "071A399782664E9885496FC727425374", 00:15:51.300 "uuid": "071a3997-8266-4e98-8549-6fc727425374" 00:15:51.300 } 00:15:51.300 ] 00:15:51.300 } 00:15:51.300 ] 00:15:51.300 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3477614 00:15:51.300 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:51.300 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3468347 00:15:51.300 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 3468347 ']' 00:15:51.300 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 3468347 00:15:51.300 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:15:51.300 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:51.300 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3468347 00:15:51.561 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:51.561 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:51.561 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3468347' 00:15:51.561 killing process with pid 3468347 00:15:51.561 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 3468347 00:15:51.561 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 3468347 00:15:51.561 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:51.561 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:51.561 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:51.561 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:51.561 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:51.561 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3477636 00:15:51.561 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3477636' 00:15:51.561 Process pid: 3477636 00:15:51.561 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:51.561 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:51.561 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3477636 00:15:51.561 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 3477636 ']' 00:15:51.561 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.561 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:51.561 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.561 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:51.561 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:51.561 [2024-11-20 07:14:13.787028] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:51.561 [2024-11-20 07:14:13.787955] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:15:51.561 [2024-11-20 07:14:13.788000] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.821 [2024-11-20 07:14:13.873637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:51.821 [2024-11-20 07:14:13.908715] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.821 [2024-11-20 07:14:13.908747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.821 [2024-11-20 07:14:13.908753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.821 [2024-11-20 07:14:13.908758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.821 [2024-11-20 07:14:13.908762] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:51.821 [2024-11-20 07:14:13.910201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.821 [2024-11-20 07:14:13.910282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.821 [2024-11-20 07:14:13.910431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.821 [2024-11-20 07:14:13.910433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:51.821 [2024-11-20 07:14:13.962991] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:51.821 [2024-11-20 07:14:13.963899] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:51.821 [2024-11-20 07:14:13.964894] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:51.821 [2024-11-20 07:14:13.965664] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:51.821 [2024-11-20 07:14:13.965672] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
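For the interrupt-mode pass, the target is relaunched with --interrupt-mode and the notices above show each reactor and nvmf poll-group thread being set to intr mode; the trace that follows then creates the VFIOUSER transport with the -M -I flags this script uses for interrupt mode. A condensed sketch of the bring-up (flags copied from this run; the per-device steps summarize the trace below):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
  # then, for each device i: mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i,
  # bdev_malloc_create 64 512 -b Malloc$i, nvmf_create_subsystem ... -a -s SPDK$i,
  # nvmf_subsystem_add_ns, and nvmf_subsystem_add_listener -t VFIOUSER -a <that dir> -s 0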
00:15:52.392 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:52.392 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:15:52.392 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:53.332 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:53.593 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:53.593 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:53.593 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:53.593 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:53.593 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:53.854 Malloc1 00:15:53.854 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:54.114 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:54.374 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:54.374 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:54.374 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:54.374 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:54.634 Malloc2 00:15:54.634 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:54.894 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:54.894 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:55.155 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:55.155 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3477636 00:15:55.155 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@952 -- # '[' -z 3477636 ']' 00:15:55.155 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 3477636 00:15:55.155 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:15:55.155 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:55.156 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3477636 00:15:55.156 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:55.156 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:55.156 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3477636' 00:15:55.156 killing process with pid 3477636 00:15:55.156 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 3477636 00:15:55.156 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 3477636 00:15:55.417 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:55.417 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:55.417 00:15:55.417 real 0m51.649s 00:15:55.417 user 3m17.947s 00:15:55.417 sys 0m2.715s 00:15:55.417 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:55.417 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:55.417 ************************************ 00:15:55.417 END TEST nvmf_vfio_user 00:15:55.417 ************************************ 00:15:55.417 07:14:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:55.417 07:14:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:55.417 07:14:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:55.417 07:14:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:55.417 ************************************ 00:15:55.417 START TEST nvmf_vfio_user_nvme_compliance 00:15:55.417 ************************************ 00:15:55.417 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:55.678 * Looking for test storage... 
00:15:55.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:55.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.678 --rc genhtml_branch_coverage=1 00:15:55.678 --rc genhtml_function_coverage=1 00:15:55.678 --rc genhtml_legend=1 00:15:55.678 --rc geninfo_all_blocks=1 00:15:55.678 --rc geninfo_unexecuted_blocks=1 00:15:55.678 00:15:55.678 ' 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:55.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.678 --rc genhtml_branch_coverage=1 00:15:55.678 --rc genhtml_function_coverage=1 00:15:55.678 --rc genhtml_legend=1 00:15:55.678 --rc geninfo_all_blocks=1 00:15:55.678 --rc geninfo_unexecuted_blocks=1 00:15:55.678 00:15:55.678 ' 00:15:55.678 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:55.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.679 --rc genhtml_branch_coverage=1 00:15:55.679 --rc genhtml_function_coverage=1 00:15:55.679 --rc genhtml_legend=1 00:15:55.679 --rc geninfo_all_blocks=1 00:15:55.679 --rc geninfo_unexecuted_blocks=1 00:15:55.679 00:15:55.679 ' 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:55.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.679 --rc genhtml_branch_coverage=1 00:15:55.679 --rc genhtml_function_coverage=1 00:15:55.679 --rc genhtml_legend=1 00:15:55.679 --rc geninfo_all_blocks=1 00:15:55.679 --rc 
geninfo_unexecuted_blocks=1 00:15:55.679 00:15:55.679 ' 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:55.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3478533 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3478533' 00:15:55.679 Process pid: 3478533 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3478533 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 3478533 ']' 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:55.679 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:55.679 [2024-11-20 07:14:17.906909] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:15:55.679 [2024-11-20 07:14:17.906984] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.939 [2024-11-20 07:14:17.996780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:55.939 [2024-11-20 07:14:18.031030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.939 [2024-11-20 07:14:18.031063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.939 [2024-11-20 07:14:18.031070] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.939 [2024-11-20 07:14:18.031074] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.939 [2024-11-20 07:14:18.031078] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:55.939 [2024-11-20 07:14:18.032211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.939 [2024-11-20 07:14:18.032570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.939 [2024-11-20 07:14:18.032571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.506 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:56.506 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:15:56.506 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:57.452 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:57.452 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:57.452 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:57.452 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.452 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:57.712 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.712 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:57.713 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:57.713 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.713 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:57.713 malloc0 00:15:57.713 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.713 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:57.713 07:14:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.713 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:57.713 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.713 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:57.713 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.713 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:57.713 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.713 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:57.713 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.713 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:57.713 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.713 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:57.713 00:15:57.713 00:15:57.713 CUnit - A unit testing framework for C - Version 2.1-3 00:15:57.713 http://cunit.sourceforge.net/ 00:15:57.713 00:15:57.713 00:15:57.713 Suite: nvme_compliance 00:15:57.713 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 07:14:19.960533] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.713 [2024-11-20 07:14:19.961809] vfio_user.c: 800:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:57.713 [2024-11-20 07:14:19.961819] vfio_user.c:5503:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:57.713 [2024-11-20 07:14:19.961824] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:57.713 [2024-11-20 07:14:19.963545] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:57.973 passed 00:15:57.973 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 07:14:20.040172] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.973 [2024-11-20 07:14:20.043189] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:57.973 passed 00:15:57.973 Test: admin_identify_ns ...[2024-11-20 07:14:20.120780] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.973 [2024-11-20 07:14:20.180169] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:57.973 [2024-11-20 07:14:20.188165] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:57.973 [2024-11-20 07:14:20.209249] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:57.973 passed 00:15:58.232 Test: admin_get_features_mandatory_features ...[2024-11-20 07:14:20.285322] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.232 [2024-11-20 07:14:20.288343] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.232 passed 00:15:58.232 Test: admin_get_features_optional_features ...[2024-11-20 07:14:20.365848] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.232 [2024-11-20 07:14:20.368868] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.232 passed 00:15:58.232 Test: admin_set_features_number_of_queues ...[2024-11-20 07:14:20.443586] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.491 [2024-11-20 07:14:20.548245] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.491 passed 00:15:58.491 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 07:14:20.621432] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.491 [2024-11-20 07:14:20.624451] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.491 passed 00:15:58.491 Test: admin_get_log_page_with_lpo ...[2024-11-20 07:14:20.701326] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.750 [2024-11-20 07:14:20.769165] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:58.750 [2024-11-20 07:14:20.782212] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.750 passed 00:15:58.750 Test: fabric_property_get ...[2024-11-20 07:14:20.857267] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.750 [2024-11-20 07:14:20.858466] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:58.750 [2024-11-20 07:14:20.860288] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.750 passed 00:15:58.750 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 07:14:20.938764] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.750 [2024-11-20 07:14:20.939969] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:58.750 [2024-11-20 07:14:20.941788] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.750 passed 00:15:58.750 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 07:14:21.016504] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:59.010 [2024-11-20 07:14:21.101163] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:59.010 [2024-11-20 07:14:21.117163] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:59.010 [2024-11-20 07:14:21.122245] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:59.010 passed 00:15:59.010 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 07:14:21.195983] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:59.010 [2024-11-20 07:14:21.197195] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:59.010 [2024-11-20 07:14:21.198999] vfio_user.c:2794:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:59.010 passed 00:15:59.010 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 07:14:21.274725] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:59.270 [2024-11-20 07:14:21.350168] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:59.270 [2024-11-20 07:14:21.374170] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:59.270 [2024-11-20 07:14:21.379230] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:59.270 passed 00:15:59.270 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 07:14:21.454723] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:59.270 [2024-11-20 07:14:21.455918] vfio_user.c:2154:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:59.270 [2024-11-20 07:14:21.455937] vfio_user.c:2148:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:59.270 [2024-11-20 07:14:21.457738] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:59.270 passed 00:15:59.270 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 07:14:21.534471] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:59.529 [2024-11-20 07:14:21.627168] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:59.529 [2024-11-20 07:14:21.635164] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:59.529 [2024-11-20 07:14:21.643165] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:59.529 [2024-11-20 07:14:21.650173] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:59.529 [2024-11-20 07:14:21.679229] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:59.529 passed 00:15:59.529 Test: admin_create_io_sq_verify_pc ...[2024-11-20 07:14:21.758772] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:59.529 [2024-11-20 07:14:21.779170] vfio_user.c:2047:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:59.529 [2024-11-20 07:14:21.796930] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:59.787 passed 00:15:59.787 Test: admin_create_io_qp_max_qps ...[2024-11-20 07:14:21.872382] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:00.738 [2024-11-20 07:14:22.982168] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:16:01.307 [2024-11-20 07:14:23.354283] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:01.307 passed 00:16:01.307 Test: admin_create_io_sq_shared_cq ...[2024-11-20 07:14:23.432050] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:01.307 [2024-11-20 07:14:23.563163] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:01.568 [2024-11-20 07:14:23.600214] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:01.568 passed 00:16:01.568 00:16:01.568 Run Summary: Type Total Ran Passed Failed Inactive 00:16:01.568 suites 1 1 n/a 0 0 00:16:01.568 tests 18 18 18 0 0 00:16:01.568 asserts 
360 360 360 0 n/a 00:16:01.568 00:16:01.568 Elapsed time = 1.497 seconds 00:16:01.568 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3478533 00:16:01.568 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 3478533 ']' 00:16:01.568 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 3478533 00:16:01.568 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:16:01.568 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:01.568 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3478533 00:16:01.568 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:01.568 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:01.568 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3478533' 00:16:01.568 killing process with pid 3478533 00:16:01.568 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 3478533 00:16:01.568 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 3478533 00:16:01.568 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:01.568 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:01.568 00:16:01.568 real 0m6.211s 00:16:01.568 user 0m17.595s 00:16:01.568 sys 0m0.537s 00:16:01.568 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:01.568 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:01.568 ************************************ 00:16:01.568 END TEST nvmf_vfio_user_nvme_compliance 00:16:01.568 ************************************ 00:16:01.827 07:14:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:01.827 07:14:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:01.827 07:14:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:01.827 07:14:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:01.827 ************************************ 00:16:01.827 START TEST nvmf_vfio_user_fuzz 00:16:01.827 ************************************ 00:16:01.827 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:01.827 * Looking for test storage... 
00:16:01.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:01.827 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:01.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.827 --rc genhtml_branch_coverage=1 00:16:01.827 --rc genhtml_function_coverage=1 00:16:01.827 --rc genhtml_legend=1 00:16:01.827 --rc geninfo_all_blocks=1 00:16:01.827 --rc geninfo_unexecuted_blocks=1 00:16:01.827 00:16:01.827 ' 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:01.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.827 --rc genhtml_branch_coverage=1 00:16:01.827 --rc genhtml_function_coverage=1 00:16:01.827 --rc genhtml_legend=1 00:16:01.827 --rc geninfo_all_blocks=1 00:16:01.827 --rc geninfo_unexecuted_blocks=1 00:16:01.827 00:16:01.827 ' 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:01.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.827 --rc genhtml_branch_coverage=1 00:16:01.827 --rc genhtml_function_coverage=1 00:16:01.827 --rc genhtml_legend=1 00:16:01.827 --rc geninfo_all_blocks=1 00:16:01.827 --rc geninfo_unexecuted_blocks=1 00:16:01.827 00:16:01.827 ' 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:01.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.827 --rc genhtml_branch_coverage=1 00:16:01.827 --rc genhtml_function_coverage=1 00:16:01.827 --rc genhtml_legend=1 00:16:01.827 --rc geninfo_all_blocks=1 00:16:01.827 --rc geninfo_unexecuted_blocks=1 00:16:01.827 00:16:01.827 ' 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:01.827 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:02.086 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.086 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.086 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.086 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.086 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.086 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.086 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.086 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.086 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.086 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:02.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3479799 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3479799' 00:16:02.087 Process pid: 3479799 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3479799 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 3479799 ']' 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
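[Annotation] The trap registered above is the harness's standard cleanup idiom: the nvmf_tgt process gets killed even if the test aborts, and the trap is cleared again once teardown has run (the "trap - SIGINT SIGTERM EXIT" visible at the end of each test). A condensed sketch under stated assumptions; killprocess is an SPDK helper approximated here with kill/wait:

    #!/usr/bin/env bash
    # Sketch of the start/trap/teardown pattern around nvmf_tgt, using the
    # binary path from this log; killprocess is approximated with kill+wait.
    NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    "$NVMF_TGT" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    trap 'kill -9 "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT

    # ... wait for the RPC socket, provision the target, run the test ...

    kill "$nvmfpid"
    wait "$nvmfpid" 2>/dev/null
    trap - SIGINT SIGTERM EXIT   # disarm once teardown succeeded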
00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:02.087 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:03.025 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:03.025 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:16:03.025 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:03.963 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:03.963 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.963 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:03.963 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.963 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:03.963 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:03.963 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.963 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:03.963 malloc0 00:16:03.963 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.963 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:03.963 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.963 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:03.963 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.963 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:03.963 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.963 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:03.963 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.963 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:03.963 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.963 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:03.963 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.963 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
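[Annotation] The rpc_cmd sequence traced above (create transport, malloc bdev, subsystem, namespace, listener) maps onto plain scripts/rpc.py calls against the default /var/tmp/spdk.sock socket; rpc_cmd is the harness wrapper around that script. A standalone equivalent, with paths assumed from this workspace:

    #!/usr/bin/env bash
    # Standalone replay of the provisioning steps logged above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2021-09.io.spdk:cnode0

    rm -rf /var/run/vfio-user && mkdir -p /var/run/vfio-user
    $RPC nvmf_create_transport -t VFIOUSER
    $RPC bdev_malloc_create 64 512 -b malloc0      # size 64 (MALLOC_BDEV_SIZE), 512 B blocks
    $RPC nvmf_create_subsystem "$NQN" -a -s spdk   # -a allows any host, -s sets the serial
    $RPC nvmf_subsystem_add_ns "$NQN" malloc0
    $RPC nvmf_subsystem_add_listener "$NQN" -t VFIOUSER -a /var/run/vfio-user -s 0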
00:16:03.963 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:36.244 Fuzzing completed. Shutting down the fuzz application 00:16:36.244 00:16:36.244 Dumping successful admin opcodes: 00:16:36.244 8, 9, 10, 24, 00:16:36.244 Dumping successful io opcodes: 00:16:36.244 0, 00:16:36.244 NS: 0x20000081ef00 I/O qp, Total commands completed: 1223215, total successful commands: 4795, random_seed: 2806925312 00:16:36.244 NS: 0x20000081ef00 admin qp, Total commands completed: 258323, total successful commands: 2084, random_seed: 369028032 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3479799 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 3479799 ']' 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 3479799 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3479799 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3479799' 00:16:36.244 killing process with pid 3479799 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 3479799 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 3479799 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:36.244 00:16:36.244 real 0m32.771s 00:16:36.244 user 0m34.911s 00:16:36.244 sys 0m25.799s 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:36.244 
************************************ 00:16:36.244 END TEST nvmf_vfio_user_fuzz 00:16:36.244 ************************************ 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:36.244 ************************************ 00:16:36.244 START TEST nvmf_auth_target 00:16:36.244 ************************************ 00:16:36.244 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:36.244 * Looking for test storage... 00:16:36.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:36.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.245 --rc genhtml_branch_coverage=1 00:16:36.245 --rc genhtml_function_coverage=1 00:16:36.245 --rc genhtml_legend=1 00:16:36.245 --rc geninfo_all_blocks=1 00:16:36.245 --rc geninfo_unexecuted_blocks=1 00:16:36.245 00:16:36.245 ' 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:36.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.245 --rc genhtml_branch_coverage=1 00:16:36.245 --rc genhtml_function_coverage=1 00:16:36.245 --rc genhtml_legend=1 00:16:36.245 --rc geninfo_all_blocks=1 00:16:36.245 --rc geninfo_unexecuted_blocks=1 00:16:36.245 00:16:36.245 ' 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:36.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.245 --rc genhtml_branch_coverage=1 00:16:36.245 --rc genhtml_function_coverage=1 00:16:36.245 --rc genhtml_legend=1 00:16:36.245 --rc geninfo_all_blocks=1 00:16:36.245 --rc geninfo_unexecuted_blocks=1 00:16:36.245 00:16:36.245 ' 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:36.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.245 --rc genhtml_branch_coverage=1 00:16:36.245 --rc genhtml_function_coverage=1 00:16:36.245 --rc genhtml_legend=1 00:16:36.245 --rc geninfo_all_blocks=1 00:16:36.245 --rc geninfo_unexecuted_blocks=1 00:16:36.245 00:16:36.245 ' 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:36.245 07:14:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:36.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:36.245 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:36.246 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:36.246 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:36.246 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:36.246 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:36.246 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:36.246 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:36.246 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:36.246 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.246 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:36.246 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:36.246 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:36.246 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.246 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:36.246 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.246 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:36.246 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:36.246 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:36.246 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:42.826 
07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:42.826 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:42.827 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:42.827 07:15:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:42.827 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:42.827 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:42.827 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:42.827 07:15:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:42.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:42.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:16:42.827 00:16:42.827 --- 10.0.0.2 ping statistics --- 00:16:42.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.827 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:42.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:42.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:16:42.827 00:16:42.827 --- 10.0.0.1 ping statistics --- 00:16:42.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.827 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:42.827 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:42.828 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:42.828 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:42.828 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:42.828 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:42.828 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:42.828 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:42.828 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.828 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3489910 00:16:42.828 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3489910 00:16:42.828 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:42.828 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3489910 ']' 00:16:42.828 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.828 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:42.828 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
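The traces above complete the test-bed bring-up: both E810 ports (device 0x159b, ice driver) are discovered, the netdev under the first port (cvl_0_0) is moved into a dedicated network namespace to act as the target at 10.0.0.2, the netdev under the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, the NVMe/TCP port is opened in iptables, and reachability is ping-verified in both directions. A condensed sketch of the same bring-up (flush and cleanup steps omitted; interface names and addresses are the ones from this run and are hardware-specific):

ip netns add cvl_0_0_ns_spdk                       # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator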
00:16:42.828 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:42.828 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3490244 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cb2f180785ed6100d3ebee559bab6858a517371a1f9a10ae 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.cAU 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cb2f180785ed6100d3ebee559bab6858a517371a1f9a10ae 0 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cb2f180785ed6100d3ebee559bab6858a517371a1f9a10ae 0 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cb2f180785ed6100d3ebee559bab6858a517371a1f9a10ae 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
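The gen_dhchap_key null 48 call above reads 24 random bytes with xxd (yielding a 48-character hex string that serves as the secret) and hands it to format_dhchap_key/format_key, whose python step is not expanded in the trace. Below is a sketch that reproduces the DHHC-1 strings seen later in this log; the payload layout (base64 over the ASCII secret plus a little-endian CRC-32 suffix) is an assumption inferred from those strings, not shown verbatim in the trace:

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 bytes -> 48 hex chars
digest=0                               # 0=null, 1=sha256, 2=sha384, 3=sha512
python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                   # the ASCII hex string itself
crc = zlib.crc32(secret).to_bytes(4, "little")  # assumed 4-byte integrity suffix
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]),
                                 base64.b64encode(secret + crc).decode()))
PYEOF

For the key cb2f1807... with digest 0 this yields the DHHC-1:00:Y2IyZjE4...== secret used in the nvme connect call further down (its base64 prefix decodes back to the hex string above).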
00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.cAU 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.cAU 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.cAU 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cb506113118cec54a0a6e8ed6995f200aa67f13c65f31115b83d9954c64a61b6 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.QYF 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cb506113118cec54a0a6e8ed6995f200aa67f13c65f31115b83d9954c64a61b6 3 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cb506113118cec54a0a6e8ed6995f200aa67f13c65f31115b83d9954c64a61b6 3 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cb506113118cec54a0a6e8ed6995f200aa67f13c65f31115b83d9954c64a61b6 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.QYF 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.QYF 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.QYF 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
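Each keys[i] secret gets a companion ckeys[i] minted with a different digest; here the null-digest secret in /tmp/spdk.key-null.cAU is paired with the sha512 secret in /tmp/spdk.key-sha512.QYF. In DH-HMAC-CHAP terms the plain key authenticates the host to the controller, while the ctrlr key lets the host authenticate the controller back, making the session bidirectional. The pairing is applied through the conditional expansion traced later at target/auth.sh@68; a minimal sketch of that pattern (the loop variable i is illustrative; rpc_cmd, subnqn and hostnqn are the suite's own helper and variables):

ckey=(${ckeys[i]:+--dhchap-ctrlr-key "ckey$i"})   # empty ckeys[i] -> argument vanishes
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$i" "${ckey[@]}"

Because ckeys[3] is left empty further down, key index 3 ends up exercising the unidirectional case.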
00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e056b06fde131f126aac634b58ef64d2 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.JhY 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e056b06fde131f126aac634b58ef64d2 1 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e056b06fde131f126aac634b58ef64d2 1 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e056b06fde131f126aac634b58ef64d2 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.JhY 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.JhY 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.JhY 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=85c86ec779c43675e41db71dbde61e36fdc4a67e568ab115 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.q2T 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 85c86ec779c43675e41db71dbde61e36fdc4a67e568ab115 2 00:16:43.398 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 85c86ec779c43675e41db71dbde61e36fdc4a67e568ab115 2 00:16:43.399 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:43.399 07:15:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:43.399 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=85c86ec779c43675e41db71dbde61e36fdc4a67e568ab115 00:16:43.399 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:43.399 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:43.659 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.q2T 00:16:43.659 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.q2T 00:16:43.659 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.q2T 00:16:43.659 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:43.659 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:43.659 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:43.659 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:43.659 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:43.659 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:43.659 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:43.659 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=efac99d2718a5815533532dde364d89e7592ef247c601949 00:16:43.659 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:43.659 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.fC4 00:16:43.659 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key efac99d2718a5815533532dde364d89e7592ef247c601949 2 00:16:43.659 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 efac99d2718a5815533532dde364d89e7592ef247c601949 2 00:16:43.659 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:43.659 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:43.659 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=efac99d2718a5815533532dde364d89e7592ef247c601949 00:16:43.659 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:43.659 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:43.659 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.fC4 00:16:43.659 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.fC4 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.fC4 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
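Every generator call so far has followed the same five traced steps: choose a digest name and hex length, read len/2 bytes from /dev/urandom via xxd, mktemp a /tmp/spdk.key-<digest>.XXX file, format the DHHC-1 string through the python helper, then chmod 0600 and echo the path for the caller to capture into keys[]/ckeys[]. A skeleton consistent with those traces (the real helper lives in nvmf/common.sh and may differ in detail, for instance in how the formatted string reaches the file):

declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
gen_dhchap_key() {
    local digest=$1 len=$2 key file              # len counts hex characters
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t "spdk.key-$digest.XXX")
    format_dhchap_key "$key" "${digests[$digest]}" > "$file"   # DHHC-1:NN:...:
    chmod 0600 "$file"                           # keep the secret private
    echo "$file"
}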
00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=21e930efdcf204e0feec38d7badb1a98 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ZFh 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 21e930efdcf204e0feec38d7badb1a98 1 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 21e930efdcf204e0feec38d7badb1a98 1 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=21e930efdcf204e0feec38d7badb1a98 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ZFh 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ZFh 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.ZFh 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3b2fbc757e17233c7d188e02ba554aaac965510dc698587fca4520cbb416d53b 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.r0j 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 3b2fbc757e17233c7d188e02ba554aaac965510dc698587fca4520cbb416d53b 3 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3b2fbc757e17233c7d188e02ba554aaac965510dc698587fca4520cbb416d53b 3 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3b2fbc757e17233c7d188e02ba554aaac965510dc698587fca4520cbb416d53b 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.r0j 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.r0j 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.r0j 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3489910 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3489910 ']' 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:43.660 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.920 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:43.920 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:43.920 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3490244 /var/tmp/host.sock 00:16:43.920 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3490244 ']' 00:16:43.920 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:16:43.920 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:43.920 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:43.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
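At this point both daemons are up: the target nvmf_tgt runs inside the namespace (pid 3489910, RPC on the default /var/tmp/spdk.sock) and a second spdk_tgt plays the host (pid 3490244, RPC on /var/tmp/host.sock, started with -L nvme_auth for handshake tracing). Four secrets exist: cAU (null) paired with QYF (sha512), JhY (sha256) with q2T (sha384), fC4 (sha384) with ZFh (sha256), and r0j (sha512) with no companion. The traces that follow load each secret into both keyrings and then sweep the digest/dhgroup/key matrix: pin the host to one digest/dhgroup pair, register the host NQN on the subsystem with its DH-CHAP keys, attach a controller (the DH-HMAC-CHAP exchange runs inside CONNECT), and check the negotiated auth state on the resulting qpair. A condensed sketch of the first iteration (sha256 digest, "null" dhgroup, key 0); the RPC invocations are taken verbatim from the traces below, with $RPC and $hostnqn standing in for the long literal paths:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# load the secret and its controller companion into both keyrings
$RPC keyring_file_add_key key0 /tmp/spdk.key-null.cAU        # target side
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QYF
$RPC -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.cAU
$RPC -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QYF

# host: restrict negotiation to one digest/dhgroup pair
$RPC -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups null

# target: admit this host on the subsystem with its DH-CHAP keys
$RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# host: attach; authentication happens during CONNECT
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# confirm what the qpair negotiated
$RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth'   # expect state=completed, digest=sha256, dhgroup=null

After detaching the bdev controller, each iteration replays the same handshake through the kernel initiator (nvme connect ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:...), disconnects, and removes the host from the subsystem so the next key index starts clean; the cntlid values 1, 3, 5 and 7 in the qpair dumps below mark the four successive controllers.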
00:16:43.920 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:43.920 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.180 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:44.180 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:44.180 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:44.180 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.180 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.180 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.180 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:44.180 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cAU 00:16:44.180 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.180 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.180 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.180 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.cAU 00:16:44.180 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.cAU 00:16:44.441 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.QYF ]] 00:16:44.441 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QYF 00:16:44.441 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.441 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.441 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.441 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QYF 00:16:44.441 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QYF 00:16:44.441 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:44.441 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.JhY 00:16:44.441 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.441 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.701 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.701 07:15:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.JhY 00:16:44.701 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.JhY 00:16:44.701 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.q2T ]] 00:16:44.701 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.q2T 00:16:44.701 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.701 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.701 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.701 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.q2T 00:16:44.701 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.q2T 00:16:44.961 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:44.961 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.fC4 00:16:44.961 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.961 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.961 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.961 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.fC4 00:16:44.961 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.fC4 00:16:45.222 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.ZFh ]] 00:16:45.222 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZFh 00:16:45.222 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.222 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.222 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.222 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZFh 00:16:45.222 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZFh 00:16:45.222 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:45.222 07:15:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.r0j 00:16:45.222 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.222 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.222 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.222 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.r0j 00:16:45.222 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.r0j 00:16:45.482 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:45.482 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:45.482 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.482 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.483 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:45.483 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:45.743 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:45.743 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.743 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.743 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:45.743 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:45.743 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.743 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.743 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.743 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.743 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.743 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.743 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.743 
07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.004 00:16:46.004 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.004 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.004 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.004 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.004 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.004 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.004 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.264 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.264 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.264 { 00:16:46.264 "cntlid": 1, 00:16:46.264 "qid": 0, 00:16:46.264 "state": "enabled", 00:16:46.264 "thread": "nvmf_tgt_poll_group_000", 00:16:46.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:46.264 "listen_address": { 00:16:46.264 "trtype": "TCP", 00:16:46.264 "adrfam": "IPv4", 00:16:46.264 "traddr": "10.0.0.2", 00:16:46.264 "trsvcid": "4420" 00:16:46.264 }, 00:16:46.264 "peer_address": { 00:16:46.264 "trtype": "TCP", 00:16:46.264 "adrfam": "IPv4", 00:16:46.264 "traddr": "10.0.0.1", 00:16:46.264 "trsvcid": "38720" 00:16:46.264 }, 00:16:46.264 "auth": { 00:16:46.264 "state": "completed", 00:16:46.264 "digest": "sha256", 00:16:46.264 "dhgroup": "null" 00:16:46.264 } 00:16:46.264 } 00:16:46.264 ]' 00:16:46.264 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.264 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.264 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.264 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:46.264 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.264 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.264 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.264 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.525 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:16:46.525 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:16:47.093 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.093 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:47.093 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.093 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.093 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.093 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.093 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:47.093 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:47.352 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:47.352 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.352 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:47.352 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:47.352 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:47.352 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.352 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.352 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.352 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.352 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.352 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.352 07:15:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.352 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.612 00:16:47.612 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.612 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.612 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.612 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.612 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.612 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.612 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.612 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.612 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.612 { 00:16:47.612 "cntlid": 3, 00:16:47.612 "qid": 0, 00:16:47.612 "state": "enabled", 00:16:47.612 "thread": "nvmf_tgt_poll_group_000", 00:16:47.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:47.612 "listen_address": { 00:16:47.612 "trtype": "TCP", 00:16:47.612 "adrfam": "IPv4", 00:16:47.612 "traddr": "10.0.0.2", 00:16:47.612 "trsvcid": "4420" 00:16:47.612 }, 00:16:47.612 "peer_address": { 00:16:47.612 "trtype": "TCP", 00:16:47.612 "adrfam": "IPv4", 00:16:47.612 "traddr": "10.0.0.1", 00:16:47.612 "trsvcid": "38746" 00:16:47.612 }, 00:16:47.612 "auth": { 00:16:47.612 "state": "completed", 00:16:47.612 "digest": "sha256", 00:16:47.612 "dhgroup": "null" 00:16:47.612 } 00:16:47.612 } 00:16:47.612 ]' 00:16:47.612 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.871 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.871 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.871 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:47.871 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.871 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.871 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.871 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.130 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:16:48.130 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:16:48.701 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.701 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:48.701 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.701 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.702 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.702 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.702 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:48.702 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:48.963 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:48.963 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.963 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.963 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:48.963 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:48.963 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.963 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.963 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.963 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.963 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.963 07:15:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.963 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.963 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.963 00:16:49.223 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.223 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.223 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.223 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.223 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.223 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.223 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.223 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.223 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.223 { 00:16:49.223 "cntlid": 5, 00:16:49.223 "qid": 0, 00:16:49.223 "state": "enabled", 00:16:49.223 "thread": "nvmf_tgt_poll_group_000", 00:16:49.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:49.223 "listen_address": { 00:16:49.223 "trtype": "TCP", 00:16:49.223 "adrfam": "IPv4", 00:16:49.223 "traddr": "10.0.0.2", 00:16:49.223 "trsvcid": "4420" 00:16:49.223 }, 00:16:49.223 "peer_address": { 00:16:49.223 "trtype": "TCP", 00:16:49.223 "adrfam": "IPv4", 00:16:49.223 "traddr": "10.0.0.1", 00:16:49.223 "trsvcid": "38774" 00:16:49.223 }, 00:16:49.223 "auth": { 00:16:49.223 "state": "completed", 00:16:49.223 "digest": "sha256", 00:16:49.223 "dhgroup": "null" 00:16:49.223 } 00:16:49.223 } 00:16:49.223 ]' 00:16:49.223 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.223 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.223 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.483 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:49.483 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.483 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.483 07:15:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.483 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.483 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:16:49.483 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:16:50.423 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.423 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:50.423 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.423 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.423 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.423 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.423 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:50.423 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:50.423 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:50.423 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.423 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.423 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:50.423 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:50.423 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.423 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:50.423 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.423 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:50.423 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.423 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:50.423 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.423 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.684 00:16:50.684 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.684 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.684 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.943 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.943 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.943 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.943 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.943 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.943 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.943 { 00:16:50.943 "cntlid": 7, 00:16:50.943 "qid": 0, 00:16:50.943 "state": "enabled", 00:16:50.943 "thread": "nvmf_tgt_poll_group_000", 00:16:50.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:50.943 "listen_address": { 00:16:50.943 "trtype": "TCP", 00:16:50.943 "adrfam": "IPv4", 00:16:50.943 "traddr": "10.0.0.2", 00:16:50.943 "trsvcid": "4420" 00:16:50.943 }, 00:16:50.943 "peer_address": { 00:16:50.943 "trtype": "TCP", 00:16:50.943 "adrfam": "IPv4", 00:16:50.943 "traddr": "10.0.0.1", 00:16:50.943 "trsvcid": "38794" 00:16:50.943 }, 00:16:50.943 "auth": { 00:16:50.943 "state": "completed", 00:16:50.943 "digest": "sha256", 00:16:50.943 "dhgroup": "null" 00:16:50.943 } 00:16:50.943 } 00:16:50.943 ]' 00:16:50.943 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.944 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.944 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.944 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:50.944 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.944 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.944 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.944 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.204 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:16:51.204 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:16:51.774 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.774 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:51.774 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.774 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.774 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.774 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.774 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.774 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:51.774 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:52.033 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:52.033 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.033 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:52.033 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:52.033 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:52.033 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.033 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.033 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.033 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.033 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.033 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.033 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.034 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.294 00:16:52.294 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.294 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.294 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.554 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.554 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.554 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.554 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.554 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.554 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.554 { 00:16:52.554 "cntlid": 9, 00:16:52.554 "qid": 0, 00:16:52.554 "state": "enabled", 00:16:52.554 "thread": "nvmf_tgt_poll_group_000", 00:16:52.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:52.554 "listen_address": { 00:16:52.554 "trtype": "TCP", 00:16:52.554 "adrfam": "IPv4", 00:16:52.554 "traddr": "10.0.0.2", 00:16:52.554 "trsvcid": "4420" 00:16:52.554 }, 00:16:52.554 "peer_address": { 00:16:52.554 "trtype": "TCP", 00:16:52.554 "adrfam": "IPv4", 00:16:52.554 "traddr": "10.0.0.1", 00:16:52.554 "trsvcid": "38812" 00:16:52.554 }, 00:16:52.554 "auth": { 00:16:52.555 "state": "completed", 00:16:52.555 "digest": "sha256", 00:16:52.555 "dhgroup": "ffdhe2048" 00:16:52.555 } 00:16:52.555 } 00:16:52.555 ]' 00:16:52.555 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.555 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.555 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.555 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:52.555 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.555 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.555 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.555 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.815 07:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:16:52.815 07:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:16:53.384 07:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.384 07:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:53.384 07:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.384 07:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.384 07:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.384 07:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.384 07:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:53.384 07:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:53.642 07:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:53.642 07:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.642 07:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:53.642 07:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:53.642 07:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:53.642 07:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.642 07:15:15 
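[annotation] Every SPDK-initiator pass is mirrored by a kernel-initiator pass: target/auth.sh@36 drives nvme-cli with the same DH-HMAC-CHAP material. A sketch with the long secrets abbreviated ($key, $ckey, $hostnqn and $hostid stand for the full DHHC-1:xx:... strings and the uuid printed in the trace):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # log expects: disconnected 1 controller(s)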
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.642 07:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.642 07:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.642 07:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.642 07:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.642 07:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.642 07:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.901 00:16:53.901 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.901 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.901 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.190 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.190 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.191 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.191 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.191 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.191 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.191 { 00:16:54.191 "cntlid": 11, 00:16:54.191 "qid": 0, 00:16:54.191 "state": "enabled", 00:16:54.191 "thread": "nvmf_tgt_poll_group_000", 00:16:54.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:54.191 "listen_address": { 00:16:54.191 "trtype": "TCP", 00:16:54.191 "adrfam": "IPv4", 00:16:54.191 "traddr": "10.0.0.2", 00:16:54.191 "trsvcid": "4420" 00:16:54.191 }, 00:16:54.191 "peer_address": { 00:16:54.191 "trtype": "TCP", 00:16:54.191 "adrfam": "IPv4", 00:16:54.191 "traddr": "10.0.0.1", 00:16:54.191 "trsvcid": "59444" 00:16:54.191 }, 00:16:54.191 "auth": { 00:16:54.191 "state": "completed", 00:16:54.191 "digest": "sha256", 00:16:54.191 "dhgroup": "ffdhe2048" 00:16:54.191 } 00:16:54.191 } 00:16:54.191 ]' 00:16:54.191 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.191 07:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.191 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.191 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:54.191 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.191 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.191 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.191 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.451 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:16:54.451 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:16:55.022 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.022 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:55.022 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.022 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.022 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.022 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.022 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:55.022 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:55.283 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:55.283 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.283 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:55.283 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:55.283 07:15:17 
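[annotation] On the target side the host is authorized per key before the attach that follows below; on the host side the controller is then attached over the authenticated queue and its presence asserted. Condensed from the trace, where hostrpc expands to rpc.py -s /var/tmp/host.sock:

    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]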
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:55.283 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.283 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.283 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.283 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.283 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.283 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.283 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.283 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.543 00:16:55.543 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.543 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.543 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.803 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.804 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.804 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.804 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.804 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.804 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.804 { 00:16:55.804 "cntlid": 13, 00:16:55.804 "qid": 0, 00:16:55.804 "state": "enabled", 00:16:55.804 "thread": "nvmf_tgt_poll_group_000", 00:16:55.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:55.804 "listen_address": { 00:16:55.804 "trtype": "TCP", 00:16:55.804 "adrfam": "IPv4", 00:16:55.804 "traddr": "10.0.0.2", 00:16:55.804 "trsvcid": "4420" 00:16:55.804 }, 00:16:55.804 "peer_address": { 00:16:55.804 "trtype": "TCP", 00:16:55.804 "adrfam": "IPv4", 00:16:55.804 "traddr": "10.0.0.1", 00:16:55.804 "trsvcid": "59458" 00:16:55.804 }, 00:16:55.804 "auth": { 00:16:55.804 "state": "completed", 00:16:55.804 "digest": 
"sha256", 00:16:55.804 "dhgroup": "ffdhe2048" 00:16:55.804 } 00:16:55.804 } 00:16:55.804 ]' 00:16:55.804 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.804 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.804 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.804 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:55.804 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.804 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.804 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.804 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.064 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:16:56.064 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:16:56.637 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.637 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:56.637 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.637 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.637 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.637 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.637 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:56.637 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:56.897 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:56.897 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.897 07:15:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.897 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:56.897 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:56.897 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.897 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:56.897 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.897 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.897 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.897 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:56.897 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.897 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:57.157 00:16:57.157 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.157 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.157 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.418 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.418 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.418 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.418 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.418 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.418 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.418 { 00:16:57.418 "cntlid": 15, 00:16:57.418 "qid": 0, 00:16:57.418 "state": "enabled", 00:16:57.418 "thread": "nvmf_tgt_poll_group_000", 00:16:57.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:57.418 "listen_address": { 00:16:57.418 "trtype": "TCP", 00:16:57.418 "adrfam": "IPv4", 00:16:57.418 "traddr": "10.0.0.2", 00:16:57.418 "trsvcid": "4420" 00:16:57.418 }, 00:16:57.418 "peer_address": { 00:16:57.418 "trtype": "TCP", 00:16:57.418 "adrfam": "IPv4", 00:16:57.418 "traddr": "10.0.0.1", 00:16:57.418 
"trsvcid": "59480" 00:16:57.418 }, 00:16:57.418 "auth": { 00:16:57.418 "state": "completed", 00:16:57.418 "digest": "sha256", 00:16:57.418 "dhgroup": "ffdhe2048" 00:16:57.418 } 00:16:57.418 } 00:16:57.418 ]' 00:16:57.418 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.418 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.418 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.418 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:57.418 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.418 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.418 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.418 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.679 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:16:57.679 07:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:16:58.250 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.250 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:58.250 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.250 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.250 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.250 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.250 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.250 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:58.250 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:58.510 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:58.510 07:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.510 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:58.510 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:58.510 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:58.510 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.510 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.510 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.510 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.510 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.510 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.510 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.510 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.770 00:16:58.770 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.770 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.770 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.030 07:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.030 07:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.030 07:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.030 07:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.030 07:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.031 07:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.031 { 00:16:59.031 "cntlid": 17, 00:16:59.031 "qid": 0, 00:16:59.031 "state": "enabled", 00:16:59.031 "thread": "nvmf_tgt_poll_group_000", 00:16:59.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:59.031 "listen_address": { 00:16:59.031 "trtype": "TCP", 00:16:59.031 "adrfam": "IPv4", 
00:16:59.031 "traddr": "10.0.0.2", 00:16:59.031 "trsvcid": "4420" 00:16:59.031 }, 00:16:59.031 "peer_address": { 00:16:59.031 "trtype": "TCP", 00:16:59.031 "adrfam": "IPv4", 00:16:59.031 "traddr": "10.0.0.1", 00:16:59.031 "trsvcid": "59498" 00:16:59.031 }, 00:16:59.031 "auth": { 00:16:59.031 "state": "completed", 00:16:59.031 "digest": "sha256", 00:16:59.031 "dhgroup": "ffdhe3072" 00:16:59.031 } 00:16:59.031 } 00:16:59.031 ]' 00:16:59.031 07:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.031 07:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.031 07:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.031 07:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:59.031 07:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.031 07:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.031 07:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.031 07:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.291 07:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:16:59.291 07:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:16:59.862 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.862 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:59.862 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.862 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.862 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.862 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.862 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:59.862 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:00.123 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:17:00.123 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.123 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:00.123 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:00.123 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:00.123 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.123 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.123 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.123 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.123 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.123 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.123 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.123 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.384 00:17:00.384 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.384 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.384 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.645 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.645 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.645 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.645 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.645 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.645 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.645 { 
00:17:00.645 "cntlid": 19, 00:17:00.645 "qid": 0, 00:17:00.645 "state": "enabled", 00:17:00.645 "thread": "nvmf_tgt_poll_group_000", 00:17:00.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:00.645 "listen_address": { 00:17:00.645 "trtype": "TCP", 00:17:00.645 "adrfam": "IPv4", 00:17:00.645 "traddr": "10.0.0.2", 00:17:00.645 "trsvcid": "4420" 00:17:00.645 }, 00:17:00.645 "peer_address": { 00:17:00.645 "trtype": "TCP", 00:17:00.645 "adrfam": "IPv4", 00:17:00.645 "traddr": "10.0.0.1", 00:17:00.645 "trsvcid": "59514" 00:17:00.645 }, 00:17:00.645 "auth": { 00:17:00.645 "state": "completed", 00:17:00.645 "digest": "sha256", 00:17:00.645 "dhgroup": "ffdhe3072" 00:17:00.645 } 00:17:00.645 } 00:17:00.645 ]' 00:17:00.645 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.645 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.645 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.645 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:00.645 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.645 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.645 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.645 07:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.906 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:17:00.906 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:17:01.478 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.478 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:01.478 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.478 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.478 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.478 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.478 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:01.479 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:01.739 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:17:01.739 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.739 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:01.739 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:01.739 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:01.739 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.739 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.739 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.739 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.739 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.739 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.739 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.739 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.998 00:17:01.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.258 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.258 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.258 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.258 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.258 07:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.258 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.258 { 00:17:02.258 "cntlid": 21, 00:17:02.258 "qid": 0, 00:17:02.258 "state": "enabled", 00:17:02.258 "thread": "nvmf_tgt_poll_group_000", 00:17:02.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:02.258 "listen_address": { 00:17:02.258 "trtype": "TCP", 00:17:02.258 "adrfam": "IPv4", 00:17:02.258 "traddr": "10.0.0.2", 00:17:02.258 "trsvcid": "4420" 00:17:02.258 }, 00:17:02.258 "peer_address": { 00:17:02.258 "trtype": "TCP", 00:17:02.258 "adrfam": "IPv4", 00:17:02.258 "traddr": "10.0.0.1", 00:17:02.258 "trsvcid": "59548" 00:17:02.258 }, 00:17:02.258 "auth": { 00:17:02.258 "state": "completed", 00:17:02.258 "digest": "sha256", 00:17:02.258 "dhgroup": "ffdhe3072" 00:17:02.258 } 00:17:02.258 } 00:17:02.258 ]' 00:17:02.258 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.258 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.258 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.258 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:02.258 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.258 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.258 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.258 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.518 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:17:02.518 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:17:03.086 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.086 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.086 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.086 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.086 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:03.086 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.086 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:03.086 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:03.346 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:03.346 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.346 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:03.347 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:03.347 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:03.347 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.347 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:03.347 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.347 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.347 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.347 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:03.347 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.347 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.607 00:17:03.607 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.607 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.607 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.866 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.866 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.866 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.867 07:15:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.867 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.867 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.867 { 00:17:03.867 "cntlid": 23, 00:17:03.867 "qid": 0, 00:17:03.867 "state": "enabled", 00:17:03.867 "thread": "nvmf_tgt_poll_group_000", 00:17:03.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:03.867 "listen_address": { 00:17:03.867 "trtype": "TCP", 00:17:03.867 "adrfam": "IPv4", 00:17:03.867 "traddr": "10.0.0.2", 00:17:03.867 "trsvcid": "4420" 00:17:03.867 }, 00:17:03.867 "peer_address": { 00:17:03.867 "trtype": "TCP", 00:17:03.867 "adrfam": "IPv4", 00:17:03.867 "traddr": "10.0.0.1", 00:17:03.867 "trsvcid": "34874" 00:17:03.867 }, 00:17:03.867 "auth": { 00:17:03.867 "state": "completed", 00:17:03.867 "digest": "sha256", 00:17:03.867 "dhgroup": "ffdhe3072" 00:17:03.867 } 00:17:03.867 } 00:17:03.867 ]' 00:17:03.867 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.867 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.867 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.867 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:03.867 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.867 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.867 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.867 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.126 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:17:04.126 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:17:04.697 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.697 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:04.697 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.697 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.697 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:04.697 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.697 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.697 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:04.697 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:04.958 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:04.958 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.958 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:04.958 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:04.958 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:04.958 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.958 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.958 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.958 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.958 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.958 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.958 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.958 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.218 00:17:05.218 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.218 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.218 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.478 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.478 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.478 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.478 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.478 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.478 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.478 { 00:17:05.478 "cntlid": 25, 00:17:05.478 "qid": 0, 00:17:05.478 "state": "enabled", 00:17:05.478 "thread": "nvmf_tgt_poll_group_000", 00:17:05.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:05.478 "listen_address": { 00:17:05.478 "trtype": "TCP", 00:17:05.478 "adrfam": "IPv4", 00:17:05.478 "traddr": "10.0.0.2", 00:17:05.478 "trsvcid": "4420" 00:17:05.478 }, 00:17:05.478 "peer_address": { 00:17:05.478 "trtype": "TCP", 00:17:05.478 "adrfam": "IPv4", 00:17:05.478 "traddr": "10.0.0.1", 00:17:05.478 "trsvcid": "34896" 00:17:05.478 }, 00:17:05.478 "auth": { 00:17:05.478 "state": "completed", 00:17:05.478 "digest": "sha256", 00:17:05.478 "dhgroup": "ffdhe4096" 00:17:05.478 } 00:17:05.478 } 00:17:05.478 ]' 00:17:05.478 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.478 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:05.478 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.478 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:05.478 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.478 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.478 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.478 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.738 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:17:05.738 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:17:06.308 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.308 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:06.308 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.308 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.308 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.308 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.308 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:06.308 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:06.569 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:06.569 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.569 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:06.569 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:06.569 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:06.569 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.569 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.569 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.569 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.569 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.569 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.569 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.569 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.829 00:17:06.829 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.829 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.829 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.090 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.090 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.090 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.090 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.090 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.090 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.090 { 00:17:07.090 "cntlid": 27, 00:17:07.090 "qid": 0, 00:17:07.090 "state": "enabled", 00:17:07.090 "thread": "nvmf_tgt_poll_group_000", 00:17:07.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:07.090 "listen_address": { 00:17:07.090 "trtype": "TCP", 00:17:07.090 "adrfam": "IPv4", 00:17:07.090 "traddr": "10.0.0.2", 00:17:07.090 "trsvcid": "4420" 00:17:07.090 }, 00:17:07.090 "peer_address": { 00:17:07.090 "trtype": "TCP", 00:17:07.090 "adrfam": "IPv4", 00:17:07.090 "traddr": "10.0.0.1", 00:17:07.090 "trsvcid": "34912" 00:17:07.090 }, 00:17:07.090 "auth": { 00:17:07.090 "state": "completed", 00:17:07.090 "digest": "sha256", 00:17:07.090 "dhgroup": "ffdhe4096" 00:17:07.090 } 00:17:07.090 } 00:17:07.090 ]' 00:17:07.090 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.090 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:07.090 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.090 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:07.090 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.090 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.090 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.090 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.351 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:17:07.351 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:17:07.922 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:07.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.922 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:07.922 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.922 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.922 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.922 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.922 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:07.922 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:08.183 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:08.183 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.183 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:08.183 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:08.183 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:08.183 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.183 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.183 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.183 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.183 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.183 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.183 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.183 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.443 00:17:08.443 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
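
(A note on the pattern being traced: each pass of this loop exercises one digest/DH-group/key combination end to end. The host is restricted to a single --dhchap-digests/--dhchap-dhgroups pair, the host NQN is added to the subsystem bound to the key pair under test, a controller is attached through the SPDK host stack, the accepted qpair's auth block is verified, and everything is torn down before the next combination. The sketch below condenses one such pass using only commands that appear in this trace; it assumes the named keys (key0/ckey0) were registered with the target earlier in the run, outside this excerpt, and that the target listens on the default RPC socket, which the rpc_cmd wrapper does not show here.)

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side (SPDK initiator, RPC socket /var/tmp/host.sock): permit exactly one
# digest and one DH group for this pass.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# Target side: allow the host NQN on the subsystem, bound to the keys under test.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller, authenticating with the same keys.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Target side: the accepted qpair should report a completed authentication
# using the digest and DH group configured above.
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect: completed

# Tear down before the next digest/DH-group/key combination.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
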
00:17:08.443 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.443 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.703 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.703 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.703 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.703 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.703 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.703 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.703 { 00:17:08.703 "cntlid": 29, 00:17:08.703 "qid": 0, 00:17:08.703 "state": "enabled", 00:17:08.703 "thread": "nvmf_tgt_poll_group_000", 00:17:08.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:08.703 "listen_address": { 00:17:08.703 "trtype": "TCP", 00:17:08.703 "adrfam": "IPv4", 00:17:08.703 "traddr": "10.0.0.2", 00:17:08.703 "trsvcid": "4420" 00:17:08.703 }, 00:17:08.703 "peer_address": { 00:17:08.703 "trtype": "TCP", 00:17:08.703 "adrfam": "IPv4", 00:17:08.703 "traddr": "10.0.0.1", 00:17:08.703 "trsvcid": "34940" 00:17:08.703 }, 00:17:08.703 "auth": { 00:17:08.703 "state": "completed", 00:17:08.703 "digest": "sha256", 00:17:08.703 "dhgroup": "ffdhe4096" 00:17:08.703 } 00:17:08.703 } 00:17:08.703 ]' 00:17:08.703 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.703 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.703 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.703 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:08.703 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.963 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.963 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.963 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.963 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:17:08.963 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: 
--dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:17:09.903 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.903 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:09.903 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.903 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.903 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.903 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.903 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:09.903 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:09.903 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:09.903 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.903 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:09.903 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:09.903 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:09.903 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.903 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:09.903 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.903 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.903 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.903 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:09.903 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.903 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.163 00:17:10.163 07:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.163 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.163 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.424 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.424 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.424 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.424 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.424 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.424 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.424 { 00:17:10.424 "cntlid": 31, 00:17:10.424 "qid": 0, 00:17:10.424 "state": "enabled", 00:17:10.424 "thread": "nvmf_tgt_poll_group_000", 00:17:10.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:10.424 "listen_address": { 00:17:10.424 "trtype": "TCP", 00:17:10.424 "adrfam": "IPv4", 00:17:10.424 "traddr": "10.0.0.2", 00:17:10.424 "trsvcid": "4420" 00:17:10.424 }, 00:17:10.424 "peer_address": { 00:17:10.424 "trtype": "TCP", 00:17:10.424 "adrfam": "IPv4", 00:17:10.424 "traddr": "10.0.0.1", 00:17:10.424 "trsvcid": "34974" 00:17:10.424 }, 00:17:10.424 "auth": { 00:17:10.424 "state": "completed", 00:17:10.424 "digest": "sha256", 00:17:10.424 "dhgroup": "ffdhe4096" 00:17:10.424 } 00:17:10.424 } 00:17:10.424 ]' 00:17:10.424 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.424 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:10.424 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.424 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:10.424 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.424 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.424 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.424 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.685 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:17:10.685 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:17:11.256 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.256 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.256 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.256 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.256 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.256 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.256 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.256 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:11.256 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:11.516 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:11.516 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.516 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:11.516 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:11.516 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:11.516 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.517 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.517 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.517 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.517 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.517 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.517 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.517 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.778 00:17:11.778 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.778 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.778 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.039 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.039 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.039 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.039 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.039 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.039 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.039 { 00:17:12.039 "cntlid": 33, 00:17:12.039 "qid": 0, 00:17:12.039 "state": "enabled", 00:17:12.039 "thread": "nvmf_tgt_poll_group_000", 00:17:12.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:12.039 "listen_address": { 00:17:12.039 "trtype": "TCP", 00:17:12.039 "adrfam": "IPv4", 00:17:12.039 "traddr": "10.0.0.2", 00:17:12.039 "trsvcid": "4420" 00:17:12.039 }, 00:17:12.039 "peer_address": { 00:17:12.039 "trtype": "TCP", 00:17:12.039 "adrfam": "IPv4", 00:17:12.039 "traddr": "10.0.0.1", 00:17:12.039 "trsvcid": "35014" 00:17:12.039 }, 00:17:12.039 "auth": { 00:17:12.039 "state": "completed", 00:17:12.039 "digest": "sha256", 00:17:12.039 "dhgroup": "ffdhe6144" 00:17:12.039 } 00:17:12.039 } 00:17:12.039 ]' 00:17:12.039 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.039 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:12.039 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.039 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:12.039 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.039 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.039 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.039 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.300 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret 
DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:17:12.300 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:17:12.870 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.870 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:12.870 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.870 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.131 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.131 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.131 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:13.131 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:13.131 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:13.131 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.131 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:13.131 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:13.131 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:13.131 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.131 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.131 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.131 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.131 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.131 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.131 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.131 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.391 00:17:13.391 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.391 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.652 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.652 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.652 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.652 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.652 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.652 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.652 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.652 { 00:17:13.652 "cntlid": 35, 00:17:13.652 "qid": 0, 00:17:13.652 "state": "enabled", 00:17:13.652 "thread": "nvmf_tgt_poll_group_000", 00:17:13.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:13.652 "listen_address": { 00:17:13.652 "trtype": "TCP", 00:17:13.652 "adrfam": "IPv4", 00:17:13.652 "traddr": "10.0.0.2", 00:17:13.652 "trsvcid": "4420" 00:17:13.652 }, 00:17:13.652 "peer_address": { 00:17:13.652 "trtype": "TCP", 00:17:13.652 "adrfam": "IPv4", 00:17:13.652 "traddr": "10.0.0.1", 00:17:13.652 "trsvcid": "59112" 00:17:13.652 }, 00:17:13.652 "auth": { 00:17:13.652 "state": "completed", 00:17:13.652 "digest": "sha256", 00:17:13.652 "dhgroup": "ffdhe6144" 00:17:13.652 } 00:17:13.652 } 00:17:13.652 ]' 00:17:13.652 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.652 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.652 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.912 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:13.912 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.913 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.913 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.913 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.913 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:17:13.913 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:17:14.854 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.854 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:14.854 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.854 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.854 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.854 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.854 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:14.854 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:14.854 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:14.854 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.854 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:14.854 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:14.854 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:14.854 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.854 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.854 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.854 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.854 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.854 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.854 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.854 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.114 00:17:15.114 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.114 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.114 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.375 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.375 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.375 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.375 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.375 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.375 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.375 { 00:17:15.375 "cntlid": 37, 00:17:15.375 "qid": 0, 00:17:15.375 "state": "enabled", 00:17:15.375 "thread": "nvmf_tgt_poll_group_000", 00:17:15.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:15.375 "listen_address": { 00:17:15.375 "trtype": "TCP", 00:17:15.375 "adrfam": "IPv4", 00:17:15.375 "traddr": "10.0.0.2", 00:17:15.375 "trsvcid": "4420" 00:17:15.375 }, 00:17:15.375 "peer_address": { 00:17:15.375 "trtype": "TCP", 00:17:15.375 "adrfam": "IPv4", 00:17:15.375 "traddr": "10.0.0.1", 00:17:15.375 "trsvcid": "59136" 00:17:15.375 }, 00:17:15.375 "auth": { 00:17:15.375 "state": "completed", 00:17:15.375 "digest": "sha256", 00:17:15.375 "dhgroup": "ffdhe6144" 00:17:15.375 } 00:17:15.375 } 00:17:15.375 ]' 00:17:15.375 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.375 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.375 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.636 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:15.636 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.636 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.636 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:15.636 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.636 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:17:15.636 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:17:16.651 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.651 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:16.651 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.651 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.651 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.651 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.651 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:16.651 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:16.651 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:16.651 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.651 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:16.651 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:16.651 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:16.651 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.651 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:16.651 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.651 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.651 07:15:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.651 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:16.651 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.651 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.967 00:17:16.967 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.967 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.967 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.249 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.249 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.249 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.249 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.249 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.249 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.249 { 00:17:17.249 "cntlid": 39, 00:17:17.249 "qid": 0, 00:17:17.249 "state": "enabled", 00:17:17.249 "thread": "nvmf_tgt_poll_group_000", 00:17:17.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:17.249 "listen_address": { 00:17:17.249 "trtype": "TCP", 00:17:17.249 "adrfam": "IPv4", 00:17:17.249 "traddr": "10.0.0.2", 00:17:17.249 "trsvcid": "4420" 00:17:17.249 }, 00:17:17.249 "peer_address": { 00:17:17.249 "trtype": "TCP", 00:17:17.249 "adrfam": "IPv4", 00:17:17.249 "traddr": "10.0.0.1", 00:17:17.249 "trsvcid": "59160" 00:17:17.249 }, 00:17:17.249 "auth": { 00:17:17.249 "state": "completed", 00:17:17.249 "digest": "sha256", 00:17:17.249 "dhgroup": "ffdhe6144" 00:17:17.249 } 00:17:17.249 } 00:17:17.249 ]' 00:17:17.249 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.249 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:17.249 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.249 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:17.249 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.249 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:17.249 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.249 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.510 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:17:17.510 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:17:18.080 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.080 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:18.080 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.080 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.080 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.080 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.080 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.080 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:18.080 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:18.340 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:18.340 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.340 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:18.340 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:18.340 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:18.340 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.340 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.340 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
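
Annotation: at this point the sweep has moved from ffdhe6144 to ffdhe8192. The auth.sh@118 through @123 markers expose the nesting of the sweep, which reads off the trace as the reconstruction below (digests, dhgroups, and keys are the arrays the script populates up front):

  for digest in "${digests[@]}"; do                                   # auth.sh@118
      for dhgroup in "${dhgroups[@]}"; do                             # auth.sh@119
          for keyid in "${!keys[@]}"; do                              # auth.sh@120
              hostrpc bdev_nvme_set_options \
                  --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"  # @121
              connect_authenticate "$digest" "$dhgroup" "$keyid"           # @123
          done
      done
  done
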
00:17:18.340 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.340 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.340 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.340 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.340 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.909 00:17:18.909 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.909 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.909 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.909 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.909 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.909 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.910 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.910 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.910 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.910 { 00:17:18.910 "cntlid": 41, 00:17:18.910 "qid": 0, 00:17:18.910 "state": "enabled", 00:17:18.910 "thread": "nvmf_tgt_poll_group_000", 00:17:18.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:18.910 "listen_address": { 00:17:18.910 "trtype": "TCP", 00:17:18.910 "adrfam": "IPv4", 00:17:18.910 "traddr": "10.0.0.2", 00:17:18.910 "trsvcid": "4420" 00:17:18.910 }, 00:17:18.910 "peer_address": { 00:17:18.910 "trtype": "TCP", 00:17:18.910 "adrfam": "IPv4", 00:17:18.910 "traddr": "10.0.0.1", 00:17:18.910 "trsvcid": "59190" 00:17:18.910 }, 00:17:18.910 "auth": { 00:17:18.910 "state": "completed", 00:17:18.910 "digest": "sha256", 00:17:18.910 "dhgroup": "ffdhe8192" 00:17:18.910 } 00:17:18.910 } 00:17:18.910 ]' 00:17:18.910 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.910 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.910 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.170 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:19.170 07:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.170 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.170 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.170 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.431 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:17:19.431 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:17:20.001 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.001 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.001 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.001 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.001 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.001 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.001 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:20.001 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:20.261 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:20.261 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.261 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:20.261 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:20.261 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:20.261 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.261 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.261 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.261 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.261 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.261 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.261 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.261 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.522 00:17:20.522 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.522 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.522 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.783 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.783 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.783 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.783 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.783 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.783 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.783 { 00:17:20.783 "cntlid": 43, 00:17:20.783 "qid": 0, 00:17:20.783 "state": "enabled", 00:17:20.783 "thread": "nvmf_tgt_poll_group_000", 00:17:20.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:20.783 "listen_address": { 00:17:20.783 "trtype": "TCP", 00:17:20.783 "adrfam": "IPv4", 00:17:20.783 "traddr": "10.0.0.2", 00:17:20.783 "trsvcid": "4420" 00:17:20.783 }, 00:17:20.783 "peer_address": { 00:17:20.783 "trtype": "TCP", 00:17:20.783 "adrfam": "IPv4", 00:17:20.783 "traddr": "10.0.0.1", 00:17:20.783 "trsvcid": "59200" 00:17:20.783 }, 00:17:20.783 "auth": { 00:17:20.783 "state": "completed", 00:17:20.783 "digest": "sha256", 00:17:20.783 "dhgroup": "ffdhe8192" 00:17:20.783 } 00:17:20.783 } 00:17:20.783 ]' 00:17:20.783 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.783 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:17:20.783 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.783 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:20.783 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.044 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.044 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.044 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.044 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:17:21.044 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:17:21.985 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.985 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:21.985 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.985 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.985 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.985 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.985 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:21.985 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:21.985 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:21.985 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.985 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:21.985 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:21.985 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:21.985 07:15:44 
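
Annotation: worth noting while reading these paired lines: every hostrpc call expands to rpc.py -s /var/tmp/host.sock, i.e. it drives a second SPDK application playing the NVMe-oF initiator, while bare rpc_cmd talks to the target over its default socket. In sketch form, with the wrapper body taken from its expansion at auth.sh@31 ($nqn standing in for the subsystem NQN):

  hostrpc() {   # auth.sh@31, as seen expanded throughout this log
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/host.sock "$@"
  }
  hostrpc bdev_nvme_get_controllers         # initiator-side RPC
  rpc_cmd nvmf_subsystem_get_qpairs "$nqn"  # target-side RPC (default socket)
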
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.985 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.985 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.985 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.985 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.985 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.985 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.985 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.556 00:17:22.556 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.556 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.556 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.556 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.556 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.556 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.556 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.556 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.556 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.556 { 00:17:22.556 "cntlid": 45, 00:17:22.556 "qid": 0, 00:17:22.556 "state": "enabled", 00:17:22.556 "thread": "nvmf_tgt_poll_group_000", 00:17:22.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:22.556 "listen_address": { 00:17:22.556 "trtype": "TCP", 00:17:22.556 "adrfam": "IPv4", 00:17:22.556 "traddr": "10.0.0.2", 00:17:22.556 "trsvcid": "4420" 00:17:22.556 }, 00:17:22.556 "peer_address": { 00:17:22.556 "trtype": "TCP", 00:17:22.556 "adrfam": "IPv4", 00:17:22.556 "traddr": "10.0.0.1", 00:17:22.556 "trsvcid": "59232" 00:17:22.556 }, 00:17:22.556 "auth": { 00:17:22.556 "state": "completed", 00:17:22.556 "digest": "sha256", 00:17:22.556 "dhgroup": "ffdhe8192" 00:17:22.556 } 00:17:22.556 } 00:17:22.556 ]' 00:17:22.556 
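
Annotation: the four secrets rotated through this file line up with the four key slots: slot 0 connects with the DHHC-1:00:... secret, slot 1 with DHHC-1:01:..., slot 2 with DHHC-1:02:..., and slot 3 with DHHC-1:03:... (the slot-3 secret carries no companion --dhchap-ctrl-secret; see the note on the key3 passes below). A quick way to pull that mapping out of a saved copy of this log (filename illustrative):

  grep -o 'DHHC-1:0[0-3]:[A-Za-z0-9+/=]*' build.log | sort -u
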
07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.557 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:22.818 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.818 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:22.818 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.818 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.818 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.818 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.079 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:17:23.079 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:17:23.651 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.651 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:23.651 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.651 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.651 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.651 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.651 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:23.651 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:23.912 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:23.912 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.912 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:23.912 07:15:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:23.912 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:23.912 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.912 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:23.912 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.912 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.912 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.912 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:23.912 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.912 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.173 00:17:24.173 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.173 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.173 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.434 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.434 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.434 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.434 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.434 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.434 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.434 { 00:17:24.434 "cntlid": 47, 00:17:24.434 "qid": 0, 00:17:24.434 "state": "enabled", 00:17:24.434 "thread": "nvmf_tgt_poll_group_000", 00:17:24.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:24.434 "listen_address": { 00:17:24.434 "trtype": "TCP", 00:17:24.434 "adrfam": "IPv4", 00:17:24.434 "traddr": "10.0.0.2", 00:17:24.434 "trsvcid": "4420" 00:17:24.434 }, 00:17:24.434 "peer_address": { 00:17:24.434 "trtype": "TCP", 00:17:24.434 "adrfam": "IPv4", 00:17:24.434 "traddr": "10.0.0.1", 00:17:24.434 "trsvcid": "59476" 00:17:24.434 }, 00:17:24.434 "auth": { 00:17:24.434 "state": "completed", 00:17:24.434 
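
Annotation: these key3 passes are the unidirectional case. Inside connect_authenticate, $3 is the key index, and ckeys[3] is empty, so the :+ expansion at auth.sh@68 collapses to an empty array: no --dhchap-ctrlr-key is passed anywhere, and the host authenticates itself to the target without challenging the controller back. The idiom, isolated (subnqn/hostnqn stand in for the literal NQNs in the trace):

  # inside connect_authenticate <digest> <dhgroup> <keyid>: $3 is the key index
  key=key$3
  ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})  # empty array when ckeys[$3] is unset
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key" "${ckey[@]}"
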
"digest": "sha256", 00:17:24.434 "dhgroup": "ffdhe8192" 00:17:24.434 } 00:17:24.434 } 00:17:24.434 ]' 00:17:24.434 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.434 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.434 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.434 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:24.694 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.694 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.694 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.694 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.694 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:17:24.694 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:17:25.267 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.267 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:25.267 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.267 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.267 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.267 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:25.267 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.267 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.267 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:25.267 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:25.528 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:25.528 07:15:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.528 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:25.528 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:25.528 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:25.528 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.528 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.528 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.528 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.528 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.528 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.528 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.528 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.789 00:17:25.789 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.789 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.789 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.049 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.049 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.049 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.049 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.049 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.049 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.049 { 00:17:26.049 "cntlid": 49, 00:17:26.049 "qid": 0, 00:17:26.049 "state": "enabled", 00:17:26.049 "thread": "nvmf_tgt_poll_group_000", 00:17:26.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:26.049 "listen_address": { 00:17:26.049 "trtype": "TCP", 00:17:26.049 "adrfam": "IPv4", 
00:17:26.049 "traddr": "10.0.0.2", 00:17:26.049 "trsvcid": "4420" 00:17:26.049 }, 00:17:26.049 "peer_address": { 00:17:26.049 "trtype": "TCP", 00:17:26.049 "adrfam": "IPv4", 00:17:26.049 "traddr": "10.0.0.1", 00:17:26.049 "trsvcid": "59500" 00:17:26.049 }, 00:17:26.049 "auth": { 00:17:26.049 "state": "completed", 00:17:26.049 "digest": "sha384", 00:17:26.049 "dhgroup": "null" 00:17:26.049 } 00:17:26.049 } 00:17:26.049 ]' 00:17:26.049 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.049 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.049 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.049 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:26.049 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.049 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.049 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.049 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.310 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:17:26.310 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:17:26.882 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.882 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:26.882 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.882 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.882 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.882 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.882 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:26.882 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:27.144 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:27.144 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.144 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:27.144 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:27.144 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:27.144 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.144 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.144 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.144 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.144 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.144 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.144 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.144 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.405 00:17:27.405 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.405 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.405 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.405 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.405 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.405 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.405 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.666 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.666 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.666 { 00:17:27.666 "cntlid": 51, 00:17:27.666 "qid": 0, 00:17:27.666 "state": "enabled", 
00:17:27.666 "thread": "nvmf_tgt_poll_group_000", 00:17:27.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:27.666 "listen_address": { 00:17:27.666 "trtype": "TCP", 00:17:27.666 "adrfam": "IPv4", 00:17:27.666 "traddr": "10.0.0.2", 00:17:27.666 "trsvcid": "4420" 00:17:27.666 }, 00:17:27.666 "peer_address": { 00:17:27.666 "trtype": "TCP", 00:17:27.666 "adrfam": "IPv4", 00:17:27.666 "traddr": "10.0.0.1", 00:17:27.666 "trsvcid": "59532" 00:17:27.666 }, 00:17:27.666 "auth": { 00:17:27.666 "state": "completed", 00:17:27.666 "digest": "sha384", 00:17:27.666 "dhgroup": "null" 00:17:27.666 } 00:17:27.666 } 00:17:27.666 ]' 00:17:27.666 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.666 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.666 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.666 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:27.666 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.666 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.666 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.666 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.927 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:17:27.927 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:17:28.498 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.498 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:28.498 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.498 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.498 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.498 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.498 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:28.498 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:28.758 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:28.758 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.758 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:28.758 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:28.758 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:28.758 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.758 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.758 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.758 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.758 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.758 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.758 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.758 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.020 00:17:29.020 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.020 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.020 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.281 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.281 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.281 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.281 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.281 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.281 07:15:51 
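
Annotation: a reading aid for the comparison lines that recur here: bash xtrace prints a quoted right-hand side of == inside [[ ]] with every character backslash-escaped, to show it is matched literally rather than as a glob. That is why nvme0 appears as \n\v\m\e\0 and completed as \c\o\m\p\l\e\t\e\d; the underlying assertions are plain string comparisons, e.g.:

  [[ sha384 == "sha384" ]]   # xtrace renders this as: [[ sha384 == \s\h\a\3\8\4 ]]
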
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.281 { 00:17:29.281 "cntlid": 53, 00:17:29.281 "qid": 0, 00:17:29.281 "state": "enabled", 00:17:29.281 "thread": "nvmf_tgt_poll_group_000", 00:17:29.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:29.281 "listen_address": { 00:17:29.281 "trtype": "TCP", 00:17:29.281 "adrfam": "IPv4", 00:17:29.281 "traddr": "10.0.0.2", 00:17:29.281 "trsvcid": "4420" 00:17:29.281 }, 00:17:29.281 "peer_address": { 00:17:29.281 "trtype": "TCP", 00:17:29.281 "adrfam": "IPv4", 00:17:29.281 "traddr": "10.0.0.1", 00:17:29.281 "trsvcid": "59554" 00:17:29.281 }, 00:17:29.281 "auth": { 00:17:29.281 "state": "completed", 00:17:29.281 "digest": "sha384", 00:17:29.281 "dhgroup": "null" 00:17:29.281 } 00:17:29.281 } 00:17:29.281 ]' 00:17:29.281 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.281 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.281 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.281 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:29.281 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.281 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.281 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.281 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.541 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:17:29.542 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:17:30.113 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.113 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.113 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.113 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.373 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.373 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:30.373 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:30.373 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:30.373 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:30.373 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.373 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:30.373 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:30.373 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:30.373 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.373 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:30.373 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.373 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.373 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.373 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:30.373 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.373 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.633 00:17:30.633 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.633 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.633 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.894 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.894 { 00:17:30.894 "cntlid": 55, 00:17:30.894 "qid": 0, 00:17:30.894 "state": "enabled", 00:17:30.894 "thread": "nvmf_tgt_poll_group_000", 00:17:30.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:30.894 "listen_address": { 00:17:30.894 "trtype": "TCP", 00:17:30.894 "adrfam": "IPv4", 00:17:30.894 "traddr": "10.0.0.2", 00:17:30.894 "trsvcid": "4420" 00:17:30.894 }, 00:17:30.894 "peer_address": { 00:17:30.894 "trtype": "TCP", 00:17:30.894 "adrfam": "IPv4", 00:17:30.894 "traddr": "10.0.0.1", 00:17:30.894 "trsvcid": "59584" 00:17:30.894 }, 00:17:30.894 "auth": { 00:17:30.894 "state": "completed", 00:17:30.894 "digest": "sha384", 00:17:30.894 "dhgroup": "null" 00:17:30.894 } 00:17:30.894 } 00:17:30.894 ]' 00:17:30.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:30.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.154 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:17:31.154 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:17:31.726 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.987 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:31.987 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.987 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.987 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.987 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.987 07:15:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.987 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:31.987 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:31.987 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:31.987 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.987 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:31.987 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:31.987 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:31.987 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.987 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.987 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.987 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.987 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.987 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.987 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.987 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.247 00:17:32.247 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.247 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.247 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.508 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.508 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.508 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:32.508 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.508 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.508 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.508 { 00:17:32.508 "cntlid": 57, 00:17:32.508 "qid": 0, 00:17:32.508 "state": "enabled", 00:17:32.508 "thread": "nvmf_tgt_poll_group_000", 00:17:32.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:32.508 "listen_address": { 00:17:32.508 "trtype": "TCP", 00:17:32.508 "adrfam": "IPv4", 00:17:32.508 "traddr": "10.0.0.2", 00:17:32.508 "trsvcid": "4420" 00:17:32.508 }, 00:17:32.508 "peer_address": { 00:17:32.508 "trtype": "TCP", 00:17:32.508 "adrfam": "IPv4", 00:17:32.508 "traddr": "10.0.0.1", 00:17:32.508 "trsvcid": "59618" 00:17:32.508 }, 00:17:32.508 "auth": { 00:17:32.508 "state": "completed", 00:17:32.508 "digest": "sha384", 00:17:32.508 "dhgroup": "ffdhe2048" 00:17:32.508 } 00:17:32.508 } 00:17:32.508 ]' 00:17:32.508 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.508 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.508 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.508 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:32.508 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.508 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.508 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.508 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.769 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:17:32.769 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:17:33.341 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.341 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:33.601 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.601 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.601 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.601 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.601 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:33.601 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:33.601 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:33.601 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.601 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:33.601 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:33.601 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:33.601 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.601 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.601 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.601 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.601 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.601 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.601 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.601 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.863 00:17:33.863 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.863 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.863 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.124 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.124 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.124 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.124 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.124 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.124 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.124 { 00:17:34.124 "cntlid": 59, 00:17:34.124 "qid": 0, 00:17:34.124 "state": "enabled", 00:17:34.124 "thread": "nvmf_tgt_poll_group_000", 00:17:34.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:34.124 "listen_address": { 00:17:34.124 "trtype": "TCP", 00:17:34.124 "adrfam": "IPv4", 00:17:34.124 "traddr": "10.0.0.2", 00:17:34.124 "trsvcid": "4420" 00:17:34.124 }, 00:17:34.124 "peer_address": { 00:17:34.124 "trtype": "TCP", 00:17:34.124 "adrfam": "IPv4", 00:17:34.124 "traddr": "10.0.0.1", 00:17:34.124 "trsvcid": "59484" 00:17:34.124 }, 00:17:34.124 "auth": { 00:17:34.124 "state": "completed", 00:17:34.124 "digest": "sha384", 00:17:34.124 "dhgroup": "ffdhe2048" 00:17:34.124 } 00:17:34.124 } 00:17:34.124 ]' 00:17:34.124 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.124 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.124 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.124 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:34.124 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.384 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.384 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.384 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.384 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:17:34.384 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:17:35.326 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.326 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.326 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.326 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.326 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.326 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.326 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:35.326 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:35.326 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:35.326 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.326 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.326 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:35.326 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:35.326 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.326 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.326 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.326 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.326 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.326 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.326 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.326 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.587 00:17:35.587 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.587 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.587 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.848 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.848 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.848 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.848 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.848 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.848 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.848 { 00:17:35.848 "cntlid": 61, 00:17:35.848 "qid": 0, 00:17:35.848 "state": "enabled", 00:17:35.848 "thread": "nvmf_tgt_poll_group_000", 00:17:35.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:35.848 "listen_address": { 00:17:35.848 "trtype": "TCP", 00:17:35.848 "adrfam": "IPv4", 00:17:35.848 "traddr": "10.0.0.2", 00:17:35.848 "trsvcid": "4420" 00:17:35.848 }, 00:17:35.848 "peer_address": { 00:17:35.848 "trtype": "TCP", 00:17:35.848 "adrfam": "IPv4", 00:17:35.848 "traddr": "10.0.0.1", 00:17:35.848 "trsvcid": "59506" 00:17:35.848 }, 00:17:35.848 "auth": { 00:17:35.848 "state": "completed", 00:17:35.848 "digest": "sha384", 00:17:35.848 "dhgroup": "ffdhe2048" 00:17:35.848 } 00:17:35.848 } 00:17:35.848 ]' 00:17:35.848 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.848 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.848 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.848 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:35.848 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.848 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.848 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.848 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.109 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:17:36.109 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:17:36.679 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.679 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.679 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.679 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.679 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.679 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.679 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:36.679 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:36.940 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:36.940 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.940 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:36.940 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:36.940 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:36.940 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.940 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:36.940 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.940 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.940 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.940 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:36.940 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.940 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.202 00:17:37.202 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.202 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.202 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.463 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.463 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.463 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.463 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.463 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.463 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.463 { 00:17:37.463 "cntlid": 63, 00:17:37.463 "qid": 0, 00:17:37.463 "state": "enabled", 00:17:37.463 "thread": "nvmf_tgt_poll_group_000", 00:17:37.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:37.463 "listen_address": { 00:17:37.463 "trtype": "TCP", 00:17:37.463 "adrfam": "IPv4", 00:17:37.463 "traddr": "10.0.0.2", 00:17:37.463 "trsvcid": "4420" 00:17:37.463 }, 00:17:37.463 "peer_address": { 00:17:37.463 "trtype": "TCP", 00:17:37.463 "adrfam": "IPv4", 00:17:37.463 "traddr": "10.0.0.1", 00:17:37.463 "trsvcid": "59530" 00:17:37.463 }, 00:17:37.463 "auth": { 00:17:37.463 "state": "completed", 00:17:37.463 "digest": "sha384", 00:17:37.463 "dhgroup": "ffdhe2048" 00:17:37.463 } 00:17:37.463 } 00:17:37.463 ]' 00:17:37.463 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.463 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.463 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.463 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:37.463 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.463 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.463 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.463 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.724 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:17:37.724 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:17:38.297 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:38.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.297 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.297 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.297 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.297 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.297 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.297 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.297 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:38.297 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:38.558 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:38.558 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.558 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:38.558 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:38.558 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:38.558 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.558 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.558 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.558 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.558 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.558 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.558 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.558 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.819 
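
Every pass of the trace up to this point follows the same cycle: pin the host to one digest/dhgroup pair, register the host NQN on the target with the key under test, attach a controller through the SPDK host stack, verify the resulting qpair, then tear everything down. A minimal sketch of that cycle, distilled from the rpc.py invocations logged above (the socket path, subsystem NQN, and keyN/ckeyN names are the ones used in this run; the loop scaffolding is illustrative, not copied from target/auth.sh):

    # Illustrative reconstruction of the cycle logged above; hostnqn stands in
    # for the uuid-based host NQN that appears throughout this trace.
    for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096; do
      for keyid in 0 1 2 3; do
        # Pin the SPDK host to a single digest/dhgroup pair.
        scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
          --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        # Allow the host on the target, supplying the key under test.
        scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
          "$hostnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
        # Attach a controller through the host stack; DH-HMAC-CHAP runs here,
        # and it must succeed for the qpair check that follows to pass.
        scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
          -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
          -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
          --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
        # ... qpair verification (see below), then tear down.
        scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
        scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
      done
    done
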
00:17:38.819 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.819 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.819 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.080 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.080 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.080 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.080 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.080 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.080 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.080 { 00:17:39.080 "cntlid": 65, 00:17:39.080 "qid": 0, 00:17:39.080 "state": "enabled", 00:17:39.080 "thread": "nvmf_tgt_poll_group_000", 00:17:39.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:39.080 "listen_address": { 00:17:39.080 "trtype": "TCP", 00:17:39.080 "adrfam": "IPv4", 00:17:39.080 "traddr": "10.0.0.2", 00:17:39.080 "trsvcid": "4420" 00:17:39.080 }, 00:17:39.080 "peer_address": { 00:17:39.080 "trtype": "TCP", 00:17:39.080 "adrfam": "IPv4", 00:17:39.080 "traddr": "10.0.0.1", 00:17:39.080 "trsvcid": "59550" 00:17:39.080 }, 00:17:39.080 "auth": { 00:17:39.080 "state": "completed", 00:17:39.080 "digest": "sha384", 00:17:39.080 "dhgroup": "ffdhe3072" 00:17:39.080 } 00:17:39.080 } 00:17:39.080 ]' 00:17:39.080 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.080 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.080 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.080 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:39.080 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.080 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.080 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.080 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.340 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:17:39.340 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:17:39.912 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.912 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.912 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.912 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.912 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.912 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.912 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:39.912 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:40.172 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:40.172 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.172 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:40.172 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:40.172 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:40.172 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.172 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.172 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.172 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.172 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.172 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.172 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.172 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.432 00:17:40.432 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.432 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.432 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.692 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.692 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.692 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.692 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.692 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.692 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.692 { 00:17:40.692 "cntlid": 67, 00:17:40.692 "qid": 0, 00:17:40.692 "state": "enabled", 00:17:40.692 "thread": "nvmf_tgt_poll_group_000", 00:17:40.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:40.692 "listen_address": { 00:17:40.692 "trtype": "TCP", 00:17:40.692 "adrfam": "IPv4", 00:17:40.692 "traddr": "10.0.0.2", 00:17:40.692 "trsvcid": "4420" 00:17:40.692 }, 00:17:40.692 "peer_address": { 00:17:40.692 "trtype": "TCP", 00:17:40.692 "adrfam": "IPv4", 00:17:40.692 "traddr": "10.0.0.1", 00:17:40.692 "trsvcid": "59580" 00:17:40.692 }, 00:17:40.692 "auth": { 00:17:40.692 "state": "completed", 00:17:40.692 "digest": "sha384", 00:17:40.692 "dhgroup": "ffdhe3072" 00:17:40.692 } 00:17:40.692 } 00:17:40.692 ]' 00:17:40.692 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.692 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.692 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.692 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:40.692 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.692 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.692 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.692 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.952 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret 
DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:17:40.952 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:17:41.524 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.524 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:41.524 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.524 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.524 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.524 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.524 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:41.524 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:41.785 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:41.785 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.785 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:41.785 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:41.786 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:41.786 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.786 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.786 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.786 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.786 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.786 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.786 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.786 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.046 00:17:42.046 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.046 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.046 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.307 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.307 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.307 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.307 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.307 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.307 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.307 { 00:17:42.307 "cntlid": 69, 00:17:42.307 "qid": 0, 00:17:42.307 "state": "enabled", 00:17:42.307 "thread": "nvmf_tgt_poll_group_000", 00:17:42.307 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:42.307 "listen_address": { 00:17:42.307 "trtype": "TCP", 00:17:42.307 "adrfam": "IPv4", 00:17:42.307 "traddr": "10.0.0.2", 00:17:42.307 "trsvcid": "4420" 00:17:42.307 }, 00:17:42.307 "peer_address": { 00:17:42.308 "trtype": "TCP", 00:17:42.308 "adrfam": "IPv4", 00:17:42.308 "traddr": "10.0.0.1", 00:17:42.308 "trsvcid": "59602" 00:17:42.308 }, 00:17:42.308 "auth": { 00:17:42.308 "state": "completed", 00:17:42.308 "digest": "sha384", 00:17:42.308 "dhgroup": "ffdhe3072" 00:17:42.308 } 00:17:42.308 } 00:17:42.308 ]' 00:17:42.308 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.308 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.308 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.308 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:42.308 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.308 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.308 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.308 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:42.568 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:17:42.568 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:17:43.140 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.140 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:43.140 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.140 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.140 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.140 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.140 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:43.140 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:43.400 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:43.400 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.400 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:43.400 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:43.400 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:43.400 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.400 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:43.400 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.400 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.400 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.400 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
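
The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line that precedes each add_host above is why key index 3 is registered with --dhchap-key key3 alone while indexes 0-2 also pass --dhchap-ctrlr-key: this run's ckeys table has no entry for index 3, so the expansion yields an empty array and bidirectional authentication is skipped for that key. A standalone illustration of the idiom (the placeholder secret is hypothetical):

    # ${var:+word} expands to word only when var is set and non-empty, so the
    # array either gains both flag words or stays empty.
    ckeys=([0]="DHHC-1:01:placeholder-controller-secret" [3]="")
    ckey=(${ckeys[0]:+--dhchap-ctrlr-key "ckey0"})
    echo "${ckey[@]}"    # --dhchap-ctrlr-key ckey0  -> bidirectional auth
    ckey=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})
    echo "${#ckey[@]}"   # 0 -> the flag is omitted, host-only auth
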
00:17:43.400 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.400 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.661 00:17:43.661 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.661 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.661 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.928 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.928 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.928 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.928 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.928 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.928 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.928 { 00:17:43.928 "cntlid": 71, 00:17:43.928 "qid": 0, 00:17:43.928 "state": "enabled", 00:17:43.929 "thread": "nvmf_tgt_poll_group_000", 00:17:43.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:43.929 "listen_address": { 00:17:43.929 "trtype": "TCP", 00:17:43.929 "adrfam": "IPv4", 00:17:43.929 "traddr": "10.0.0.2", 00:17:43.929 "trsvcid": "4420" 00:17:43.929 }, 00:17:43.929 "peer_address": { 00:17:43.929 "trtype": "TCP", 00:17:43.929 "adrfam": "IPv4", 00:17:43.929 "traddr": "10.0.0.1", 00:17:43.929 "trsvcid": "41664" 00:17:43.929 }, 00:17:43.929 "auth": { 00:17:43.929 "state": "completed", 00:17:43.929 "digest": "sha384", 00:17:43.929 "dhgroup": "ffdhe3072" 00:17:43.929 } 00:17:43.929 } 00:17:43.929 ]' 00:17:43.929 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.929 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.929 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.929 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:43.929 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.929 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.929 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.929 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.193 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:17:44.193 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:17:44.765 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.765 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.765 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.765 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.765 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.765 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.765 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.765 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:44.765 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:45.026 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:45.026 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.026 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:45.026 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:45.026 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:45.026 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.026 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.026 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.026 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.026 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
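Each attach is then verified by inspecting the target's queue pairs: the auth block returned by nvmf_subsystem_get_qpairs must report the negotiated digest and dhgroup with state "completed", exactly as the jq checks above do. A minimal sketch of that verification, assuming jq is available and using the same rpc.py path as this run:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
# The qpair must have finished in-band authentication with the expected parameters.
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]

The kernel-initiator leg of each iteration reuses the same secrets in nvme-cli's interchange format, DHHC-1:<nn>:<base64>:, where, as we read the NVMe in-band authentication format, nn encodes the transformation hash (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512); --dhchap-secret authenticates the host, while --dhchap-ctrl-secret additionally requests bidirectional authentication of the controller.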
00:17:45.026 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.026 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.026 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.287 00:17:45.287 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.287 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.287 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.547 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.547 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.547 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.547 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.547 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.547 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.547 { 00:17:45.547 "cntlid": 73, 00:17:45.547 "qid": 0, 00:17:45.547 "state": "enabled", 00:17:45.547 "thread": "nvmf_tgt_poll_group_000", 00:17:45.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:45.548 "listen_address": { 00:17:45.548 "trtype": "TCP", 00:17:45.548 "adrfam": "IPv4", 00:17:45.548 "traddr": "10.0.0.2", 00:17:45.548 "trsvcid": "4420" 00:17:45.548 }, 00:17:45.548 "peer_address": { 00:17:45.548 "trtype": "TCP", 00:17:45.548 "adrfam": "IPv4", 00:17:45.548 "traddr": "10.0.0.1", 00:17:45.548 "trsvcid": "41678" 00:17:45.548 }, 00:17:45.548 "auth": { 00:17:45.548 "state": "completed", 00:17:45.548 "digest": "sha384", 00:17:45.548 "dhgroup": "ffdhe4096" 00:17:45.548 } 00:17:45.548 } 00:17:45.548 ]' 00:17:45.548 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.548 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:45.548 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.548 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:45.548 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.548 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.548 
07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.548 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.809 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:17:45.809 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:17:46.381 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.381 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:46.381 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.381 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.381 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.381 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.381 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:46.381 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:46.642 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:46.642 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.642 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:46.642 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:46.642 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:46.642 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.642 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.642 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.642 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.642 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.642 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.642 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.642 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.903 00:17:46.903 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.903 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.903 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.163 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.163 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.163 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.163 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.163 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.163 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.163 { 00:17:47.163 "cntlid": 75, 00:17:47.163 "qid": 0, 00:17:47.163 "state": "enabled", 00:17:47.163 "thread": "nvmf_tgt_poll_group_000", 00:17:47.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:47.163 "listen_address": { 00:17:47.163 "trtype": "TCP", 00:17:47.163 "adrfam": "IPv4", 00:17:47.163 "traddr": "10.0.0.2", 00:17:47.163 "trsvcid": "4420" 00:17:47.163 }, 00:17:47.163 "peer_address": { 00:17:47.163 "trtype": "TCP", 00:17:47.163 "adrfam": "IPv4", 00:17:47.163 "traddr": "10.0.0.1", 00:17:47.163 "trsvcid": "41694" 00:17:47.163 }, 00:17:47.163 "auth": { 00:17:47.163 "state": "completed", 00:17:47.163 "digest": "sha384", 00:17:47.163 "dhgroup": "ffdhe4096" 00:17:47.163 } 00:17:47.163 } 00:17:47.163 ]' 00:17:47.163 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.163 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:47.163 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.163 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:47.163 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.163 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.163 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.163 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.423 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:17:47.423 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:17:47.997 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.997 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:48.258 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.258 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.258 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.259 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.259 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:48.259 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:48.259 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:48.259 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.259 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:48.259 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:48.259 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:48.259 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.259 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.259 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.259 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.259 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.259 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.259 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.259 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.519 00:17:48.519 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.519 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.519 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.781 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.781 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.781 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.781 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.781 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.781 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.781 { 00:17:48.781 "cntlid": 77, 00:17:48.781 "qid": 0, 00:17:48.781 "state": "enabled", 00:17:48.781 "thread": "nvmf_tgt_poll_group_000", 00:17:48.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:48.781 "listen_address": { 00:17:48.781 "trtype": "TCP", 00:17:48.781 "adrfam": "IPv4", 00:17:48.781 "traddr": "10.0.0.2", 00:17:48.781 "trsvcid": "4420" 00:17:48.781 }, 00:17:48.781 "peer_address": { 00:17:48.781 "trtype": "TCP", 00:17:48.781 "adrfam": "IPv4", 00:17:48.781 "traddr": "10.0.0.1", 00:17:48.781 "trsvcid": "41728" 00:17:48.781 }, 00:17:48.781 "auth": { 00:17:48.781 "state": "completed", 00:17:48.781 "digest": "sha384", 00:17:48.781 "dhgroup": "ffdhe4096" 00:17:48.781 } 00:17:48.781 } 00:17:48.781 ]' 00:17:48.781 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.781 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:48.781 07:16:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.781 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:48.781 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.781 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.781 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.781 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.042 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:17:49.042 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:17:49.614 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.614 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.614 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.614 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.614 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.614 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.614 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:49.614 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:49.874 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:49.874 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.874 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:49.874 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:49.874 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:49.874 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.874 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:49.874 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.874 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.874 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.874 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:49.874 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.874 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.135 00:17:50.135 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.135 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.135 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.396 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.396 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.396 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.396 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.396 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.396 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.396 { 00:17:50.396 "cntlid": 79, 00:17:50.396 "qid": 0, 00:17:50.396 "state": "enabled", 00:17:50.396 "thread": "nvmf_tgt_poll_group_000", 00:17:50.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:50.396 "listen_address": { 00:17:50.396 "trtype": "TCP", 00:17:50.396 "adrfam": "IPv4", 00:17:50.396 "traddr": "10.0.0.2", 00:17:50.396 "trsvcid": "4420" 00:17:50.396 }, 00:17:50.396 "peer_address": { 00:17:50.396 "trtype": "TCP", 00:17:50.396 "adrfam": "IPv4", 00:17:50.396 "traddr": "10.0.0.1", 00:17:50.396 "trsvcid": "41756" 00:17:50.396 }, 00:17:50.396 "auth": { 00:17:50.396 "state": "completed", 00:17:50.396 "digest": "sha384", 00:17:50.396 "dhgroup": "ffdhe4096" 00:17:50.396 } 00:17:50.396 } 00:17:50.396 ]' 00:17:50.396 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.396 07:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:50.396 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.396 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:50.396 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.396 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.396 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.396 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.657 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:17:50.657 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:17:51.228 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.228 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.228 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.228 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.228 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.228 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.228 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.228 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:51.228 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:51.489 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:51.489 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.489 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:51.489 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:51.489 07:16:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:51.489 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.489 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.489 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.489 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.489 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.489 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.489 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.489 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.750 00:17:51.750 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.750 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.750 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.011 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.011 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.011 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.011 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.011 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.011 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.011 { 00:17:52.011 "cntlid": 81, 00:17:52.011 "qid": 0, 00:17:52.011 "state": "enabled", 00:17:52.011 "thread": "nvmf_tgt_poll_group_000", 00:17:52.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:52.011 "listen_address": { 00:17:52.011 "trtype": "TCP", 00:17:52.011 "adrfam": "IPv4", 00:17:52.011 "traddr": "10.0.0.2", 00:17:52.011 "trsvcid": "4420" 00:17:52.011 }, 00:17:52.011 "peer_address": { 00:17:52.011 "trtype": "TCP", 00:17:52.011 "adrfam": "IPv4", 00:17:52.011 "traddr": "10.0.0.1", 00:17:52.011 "trsvcid": "41776" 00:17:52.011 }, 00:17:52.011 "auth": { 00:17:52.011 "state": "completed", 00:17:52.011 "digest": 
"sha384", 00:17:52.011 "dhgroup": "ffdhe6144" 00:17:52.011 } 00:17:52.011 } 00:17:52.011 ]' 00:17:52.011 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.011 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:52.011 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.011 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:52.011 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.271 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.271 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.271 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.271 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:17:52.271 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:17:52.841 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.102 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.102 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.102 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.102 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.102 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.102 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:53.102 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:53.102 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:53.102 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.102 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:53.102 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:53.102 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:53.102 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.102 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.102 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.102 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.102 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.102 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.102 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.102 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.675 00:17:53.675 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.675 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.675 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.675 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.675 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.675 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.675 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.675 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.675 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.675 { 00:17:53.675 "cntlid": 83, 00:17:53.675 "qid": 0, 00:17:53.675 "state": "enabled", 00:17:53.675 "thread": "nvmf_tgt_poll_group_000", 00:17:53.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:53.675 "listen_address": { 00:17:53.675 "trtype": "TCP", 00:17:53.675 "adrfam": "IPv4", 00:17:53.675 "traddr": "10.0.0.2", 00:17:53.675 
"trsvcid": "4420" 00:17:53.675 }, 00:17:53.675 "peer_address": { 00:17:53.675 "trtype": "TCP", 00:17:53.675 "adrfam": "IPv4", 00:17:53.675 "traddr": "10.0.0.1", 00:17:53.675 "trsvcid": "49966" 00:17:53.675 }, 00:17:53.675 "auth": { 00:17:53.675 "state": "completed", 00:17:53.675 "digest": "sha384", 00:17:53.675 "dhgroup": "ffdhe6144" 00:17:53.675 } 00:17:53.675 } 00:17:53.675 ]' 00:17:53.675 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.675 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:53.675 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.936 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:53.936 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.936 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.936 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.936 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.936 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:17:53.936 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:17:54.966 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.966 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.966 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.966 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.966 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.966 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.966 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:54.966 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:54.966 
07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:54.966 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.966 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:54.966 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:54.966 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:54.966 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.966 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.966 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.966 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.966 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.966 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.966 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.966 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.267 00:17:55.267 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.267 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.267 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.529 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.529 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.529 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.529 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.529 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.529 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.529 { 00:17:55.529 "cntlid": 85, 00:17:55.529 "qid": 0, 00:17:55.529 "state": "enabled", 00:17:55.529 "thread": "nvmf_tgt_poll_group_000", 00:17:55.529 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:55.529 "listen_address": { 00:17:55.529 "trtype": "TCP", 00:17:55.529 "adrfam": "IPv4", 00:17:55.529 "traddr": "10.0.0.2", 00:17:55.529 "trsvcid": "4420" 00:17:55.529 }, 00:17:55.529 "peer_address": { 00:17:55.529 "trtype": "TCP", 00:17:55.529 "adrfam": "IPv4", 00:17:55.529 "traddr": "10.0.0.1", 00:17:55.529 "trsvcid": "49990" 00:17:55.529 }, 00:17:55.529 "auth": { 00:17:55.529 "state": "completed", 00:17:55.529 "digest": "sha384", 00:17:55.529 "dhgroup": "ffdhe6144" 00:17:55.529 } 00:17:55.529 } 00:17:55.529 ]' 00:17:55.529 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.529 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:55.529 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.529 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:55.529 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.529 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.529 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.529 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.789 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:17:55.789 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:17:56.360 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.360 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:56.360 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.360 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.360 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.360 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.360 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:56.360 07:16:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:56.620 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:56.620 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.620 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:56.620 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:56.620 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:56.620 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.620 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:56.620 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.620 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.620 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.620 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:56.620 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.620 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.880 00:17:56.880 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.880 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.880 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.140 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.140 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.140 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.140 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.140 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.140 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.140 { 00:17:57.140 "cntlid": 87, 
00:17:57.140 "qid": 0, 00:17:57.140 "state": "enabled", 00:17:57.140 "thread": "nvmf_tgt_poll_group_000", 00:17:57.140 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:57.140 "listen_address": { 00:17:57.140 "trtype": "TCP", 00:17:57.140 "adrfam": "IPv4", 00:17:57.140 "traddr": "10.0.0.2", 00:17:57.140 "trsvcid": "4420" 00:17:57.140 }, 00:17:57.140 "peer_address": { 00:17:57.140 "trtype": "TCP", 00:17:57.140 "adrfam": "IPv4", 00:17:57.140 "traddr": "10.0.0.1", 00:17:57.140 "trsvcid": "50028" 00:17:57.140 }, 00:17:57.140 "auth": { 00:17:57.140 "state": "completed", 00:17:57.140 "digest": "sha384", 00:17:57.140 "dhgroup": "ffdhe6144" 00:17:57.140 } 00:17:57.140 } 00:17:57.140 ]' 00:17:57.140 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.140 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:57.140 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.140 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:57.140 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.401 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.401 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.401 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.401 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:17:57.401 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:17:57.971 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.972 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.972 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.972 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.232 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.232 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.232 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.232 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:58.233 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:58.233 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:58.233 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.233 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:58.233 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:58.233 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:58.233 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.233 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.233 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.233 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.233 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.233 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.233 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.233 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.804 00:17:58.804 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.804 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.804 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.804 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.804 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.804 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.804 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.066 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.066 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.066 { 00:17:59.066 "cntlid": 89, 00:17:59.066 "qid": 0, 00:17:59.066 "state": "enabled", 00:17:59.066 "thread": "nvmf_tgt_poll_group_000", 00:17:59.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:59.066 "listen_address": { 00:17:59.066 "trtype": "TCP", 00:17:59.066 "adrfam": "IPv4", 00:17:59.066 "traddr": "10.0.0.2", 00:17:59.066 "trsvcid": "4420" 00:17:59.066 }, 00:17:59.066 "peer_address": { 00:17:59.066 "trtype": "TCP", 00:17:59.066 "adrfam": "IPv4", 00:17:59.066 "traddr": "10.0.0.1", 00:17:59.066 "trsvcid": "50060" 00:17:59.066 }, 00:17:59.066 "auth": { 00:17:59.066 "state": "completed", 00:17:59.066 "digest": "sha384", 00:17:59.066 "dhgroup": "ffdhe8192" 00:17:59.066 } 00:17:59.066 } 00:17:59.066 ]' 00:17:59.066 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.066 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.066 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.066 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:59.066 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.066 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.066 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.066 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.327 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:17:59.327 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:17:59.898 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.898 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.898 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.898 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.898 07:16:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.898 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.898 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:59.898 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:00.158 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:00.158 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.158 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:00.158 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:00.158 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:00.158 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.158 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.158 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.158 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.158 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.158 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.158 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.158 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.728 00:18:00.728 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.728 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.728 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.728 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.728 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:00.728 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.728 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.728 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.728 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.728 { 00:18:00.728 "cntlid": 91, 00:18:00.728 "qid": 0, 00:18:00.728 "state": "enabled", 00:18:00.728 "thread": "nvmf_tgt_poll_group_000", 00:18:00.728 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:00.728 "listen_address": { 00:18:00.728 "trtype": "TCP", 00:18:00.728 "adrfam": "IPv4", 00:18:00.728 "traddr": "10.0.0.2", 00:18:00.728 "trsvcid": "4420" 00:18:00.728 }, 00:18:00.728 "peer_address": { 00:18:00.728 "trtype": "TCP", 00:18:00.728 "adrfam": "IPv4", 00:18:00.728 "traddr": "10.0.0.1", 00:18:00.728 "trsvcid": "50086" 00:18:00.728 }, 00:18:00.728 "auth": { 00:18:00.728 "state": "completed", 00:18:00.728 "digest": "sha384", 00:18:00.728 "dhgroup": "ffdhe8192" 00:18:00.728 } 00:18:00.728 } 00:18:00.728 ]' 00:18:00.728 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.728 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:00.728 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.728 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:00.987 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.987 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.987 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.988 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.988 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:18:00.988 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:18:01.925 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.925 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.925 07:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.925 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.925 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.925 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.925 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:01.925 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:01.925 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:01.925 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.925 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:01.925 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:01.925 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:01.925 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.925 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.925 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.925 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.925 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.925 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.925 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.925 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.494 00:18:02.494 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.494 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.494 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.494 07:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.494 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.494 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.494 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.754 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.754 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.754 { 00:18:02.754 "cntlid": 93, 00:18:02.754 "qid": 0, 00:18:02.754 "state": "enabled", 00:18:02.754 "thread": "nvmf_tgt_poll_group_000", 00:18:02.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:02.754 "listen_address": { 00:18:02.754 "trtype": "TCP", 00:18:02.754 "adrfam": "IPv4", 00:18:02.754 "traddr": "10.0.0.2", 00:18:02.754 "trsvcid": "4420" 00:18:02.754 }, 00:18:02.754 "peer_address": { 00:18:02.754 "trtype": "TCP", 00:18:02.754 "adrfam": "IPv4", 00:18:02.754 "traddr": "10.0.0.1", 00:18:02.754 "trsvcid": "50124" 00:18:02.754 }, 00:18:02.754 "auth": { 00:18:02.754 "state": "completed", 00:18:02.754 "digest": "sha384", 00:18:02.754 "dhgroup": "ffdhe8192" 00:18:02.754 } 00:18:02.754 } 00:18:02.754 ]' 00:18:02.754 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.754 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.754 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.754 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:02.754 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.754 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.754 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.754 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.013 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:18:03.013 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:18:03.583 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.584 07:16:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.584 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.584 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.584 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.584 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.584 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:03.584 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:03.844 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:03.844 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.844 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:03.844 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:03.844 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:03.844 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.844 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:03.844 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.844 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.844 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.844 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:03.844 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:03.844 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.415 00:18:04.415 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.415 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.415 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.415 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.415 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.415 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.415 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.415 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.415 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.415 { 00:18:04.415 "cntlid": 95, 00:18:04.415 "qid": 0, 00:18:04.415 "state": "enabled", 00:18:04.415 "thread": "nvmf_tgt_poll_group_000", 00:18:04.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:04.415 "listen_address": { 00:18:04.415 "trtype": "TCP", 00:18:04.415 "adrfam": "IPv4", 00:18:04.415 "traddr": "10.0.0.2", 00:18:04.415 "trsvcid": "4420" 00:18:04.415 }, 00:18:04.415 "peer_address": { 00:18:04.415 "trtype": "TCP", 00:18:04.415 "adrfam": "IPv4", 00:18:04.415 "traddr": "10.0.0.1", 00:18:04.415 "trsvcid": "36026" 00:18:04.415 }, 00:18:04.415 "auth": { 00:18:04.415 "state": "completed", 00:18:04.415 "digest": "sha384", 00:18:04.415 "dhgroup": "ffdhe8192" 00:18:04.415 } 00:18:04.415 } 00:18:04.415 ]' 00:18:04.415 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.415 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.415 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.675 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:04.675 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.675 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.675 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.675 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.676 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:18:04.676 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:18:05.626 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.626 07:16:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.626 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.626 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.626 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.626 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:05.626 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.627 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.627 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:05.627 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:05.627 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:05.627 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.627 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.627 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:05.627 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:05.627 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.627 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.627 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.627 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.627 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.627 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.627 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.627 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.893 00:18:05.893 
07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.893 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.893 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.154 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.154 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.154 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.154 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.154 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.154 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.154 { 00:18:06.154 "cntlid": 97, 00:18:06.154 "qid": 0, 00:18:06.154 "state": "enabled", 00:18:06.154 "thread": "nvmf_tgt_poll_group_000", 00:18:06.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:06.154 "listen_address": { 00:18:06.154 "trtype": "TCP", 00:18:06.154 "adrfam": "IPv4", 00:18:06.154 "traddr": "10.0.0.2", 00:18:06.154 "trsvcid": "4420" 00:18:06.154 }, 00:18:06.154 "peer_address": { 00:18:06.154 "trtype": "TCP", 00:18:06.154 "adrfam": "IPv4", 00:18:06.154 "traddr": "10.0.0.1", 00:18:06.154 "trsvcid": "36066" 00:18:06.154 }, 00:18:06.154 "auth": { 00:18:06.154 "state": "completed", 00:18:06.154 "digest": "sha512", 00:18:06.154 "dhgroup": "null" 00:18:06.154 } 00:18:06.154 } 00:18:06.154 ]' 00:18:06.154 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.154 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.154 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.154 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:06.154 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.154 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.154 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.154 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.414 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:18:06.414 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:18:06.985 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.985 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.985 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.985 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.985 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.985 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.985 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:06.985 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:07.247 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:07.247 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.247 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.247 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:07.247 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:07.247 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.247 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.247 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.247 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.247 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.247 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.247 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.247 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.508 00:18:07.508 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.508 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.508 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.508 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.508 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.508 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.508 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.508 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.508 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.508 { 00:18:07.508 "cntlid": 99, 00:18:07.508 "qid": 0, 00:18:07.508 "state": "enabled", 00:18:07.508 "thread": "nvmf_tgt_poll_group_000", 00:18:07.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:07.508 "listen_address": { 00:18:07.508 "trtype": "TCP", 00:18:07.508 "adrfam": "IPv4", 00:18:07.508 "traddr": "10.0.0.2", 00:18:07.508 "trsvcid": "4420" 00:18:07.508 }, 00:18:07.508 "peer_address": { 00:18:07.508 "trtype": "TCP", 00:18:07.508 "adrfam": "IPv4", 00:18:07.508 "traddr": "10.0.0.1", 00:18:07.508 "trsvcid": "36106" 00:18:07.508 }, 00:18:07.508 "auth": { 00:18:07.508 "state": "completed", 00:18:07.508 "digest": "sha512", 00:18:07.508 "dhgroup": "null" 00:18:07.508 } 00:18:07.508 } 00:18:07.508 ]' 00:18:07.508 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.770 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.770 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.770 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:07.770 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.770 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.770 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.770 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.030 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:18:08.030 07:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:18:08.602 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.602 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.602 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.602 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.602 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.602 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.602 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:08.602 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:08.862 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:08.862 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.862 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.862 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:08.862 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:08.862 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.862 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.862 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.862 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.862 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.862 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.862 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
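(For orientation: one pass of the digest/dhgroup/key loop traced in this log reduces to the three-step RPC sequence below. This is a minimal sketch assembled from the commands shown above, not the script itself; hostrpc/tgtrpc are shorthands introduced here, the target-side socket is an assumption since only the host-side socket /var/tmp/host.sock appears in the trace, and the digest, dhgroup, and key index change on every pass.)

hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
tgtrpc()  { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }  # assumed default target socket

# 1. host side: restrict the initiator to the digest/dhgroup pair under test
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null

# 2. target side: allow the host NQN with the matching DH-HMAC-CHAP key
#    (--dhchap-ctrlr-key is passed only when a controller key ckeyN exists)
tgtrpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. host side: attach a controller, forcing authentication on the new qpair
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2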
00:18:08.862 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.862 00:18:09.122 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.122 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.122 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.122 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.122 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.122 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.122 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.122 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.122 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.122 { 00:18:09.122 "cntlid": 101, 00:18:09.122 "qid": 0, 00:18:09.122 "state": "enabled", 00:18:09.122 "thread": "nvmf_tgt_poll_group_000", 00:18:09.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:09.123 "listen_address": { 00:18:09.123 "trtype": "TCP", 00:18:09.123 "adrfam": "IPv4", 00:18:09.123 "traddr": "10.0.0.2", 00:18:09.123 "trsvcid": "4420" 00:18:09.123 }, 00:18:09.123 "peer_address": { 00:18:09.123 "trtype": "TCP", 00:18:09.123 "adrfam": "IPv4", 00:18:09.123 "traddr": "10.0.0.1", 00:18:09.123 "trsvcid": "36132" 00:18:09.123 }, 00:18:09.123 "auth": { 00:18:09.123 "state": "completed", 00:18:09.123 "digest": "sha512", 00:18:09.123 "dhgroup": "null" 00:18:09.123 } 00:18:09.123 } 00:18:09.123 ]' 00:18:09.123 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.382 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.382 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.382 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:09.382 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.382 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.382 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.382 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.643 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:18:09.643 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:18:10.214 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.214 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.214 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.214 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.214 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.214 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.214 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:10.214 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:10.475 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:10.475 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.475 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:10.475 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:10.475 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:10.475 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.475 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:10.475 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.475 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.475 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.475 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:10.475 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.475 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.475 00:18:10.475 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.475 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.475 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.736 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.736 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.736 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.736 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.736 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.736 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.736 { 00:18:10.736 "cntlid": 103, 00:18:10.736 "qid": 0, 00:18:10.736 "state": "enabled", 00:18:10.736 "thread": "nvmf_tgt_poll_group_000", 00:18:10.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:10.736 "listen_address": { 00:18:10.736 "trtype": "TCP", 00:18:10.736 "adrfam": "IPv4", 00:18:10.736 "traddr": "10.0.0.2", 00:18:10.736 "trsvcid": "4420" 00:18:10.736 }, 00:18:10.736 "peer_address": { 00:18:10.736 "trtype": "TCP", 00:18:10.736 "adrfam": "IPv4", 00:18:10.736 "traddr": "10.0.0.1", 00:18:10.736 "trsvcid": "36170" 00:18:10.736 }, 00:18:10.736 "auth": { 00:18:10.736 "state": "completed", 00:18:10.736 "digest": "sha512", 00:18:10.736 "dhgroup": "null" 00:18:10.736 } 00:18:10.736 } 00:18:10.736 ]' 00:18:10.736 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.736 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.736 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.996 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:10.996 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.996 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.996 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.996 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.996 07:16:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:18:10.996 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:18:11.939 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.939 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.939 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.939 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.939 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.939 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.939 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.939 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:11.939 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:11.939 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:11.939 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.939 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.939 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:11.939 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:11.939 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.939 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.939 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.939 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.939 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.939 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
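(Each pass is then verified and torn down the same way; sketched below under the same assumptions as the sketch above, with $key/$ckey standing in for the DHHC-1:... secrets printed in the trace.)

# verify the controller came up and the qpair actually authenticated,
# mirroring the jq checks in the trace
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
tgtrpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth | .digest, .dhgroup, .state'    # expect this pass's digest/dhgroup and "completed"
hostrpc bdev_nvme_detach_controller nvme0

# exercise the same key pair through the kernel initiator, then clean up
# ($key/$ckey are placeholders; --dhchap-ctrl-secret is omitted on passes
#  that use key3, which has no controller key in this run)
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
tgtrpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be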
00:18:11.939 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.939 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.200 00:18:12.200 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.200 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.200 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.460 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.460 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.460 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.460 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.460 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.460 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.461 { 00:18:12.461 "cntlid": 105, 00:18:12.461 "qid": 0, 00:18:12.461 "state": "enabled", 00:18:12.461 "thread": "nvmf_tgt_poll_group_000", 00:18:12.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:12.461 "listen_address": { 00:18:12.461 "trtype": "TCP", 00:18:12.461 "adrfam": "IPv4", 00:18:12.461 "traddr": "10.0.0.2", 00:18:12.461 "trsvcid": "4420" 00:18:12.461 }, 00:18:12.461 "peer_address": { 00:18:12.461 "trtype": "TCP", 00:18:12.461 "adrfam": "IPv4", 00:18:12.461 "traddr": "10.0.0.1", 00:18:12.461 "trsvcid": "36208" 00:18:12.461 }, 00:18:12.461 "auth": { 00:18:12.461 "state": "completed", 00:18:12.461 "digest": "sha512", 00:18:12.461 "dhgroup": "ffdhe2048" 00:18:12.461 } 00:18:12.461 } 00:18:12.461 ]' 00:18:12.461 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.461 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.461 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.461 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:12.461 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.461 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.461 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.461 07:16:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.722 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:18:12.722 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:18:13.293 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.294 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.294 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.294 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.294 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.294 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.294 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:13.294 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:13.554 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:13.554 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.554 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:13.554 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:13.554 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:13.554 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.554 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.554 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.554 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:13.554 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.554 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.554 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.555 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.816 00:18:13.816 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.816 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.816 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.077 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.077 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.077 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.077 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.077 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.077 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.077 { 00:18:14.077 "cntlid": 107, 00:18:14.077 "qid": 0, 00:18:14.077 "state": "enabled", 00:18:14.077 "thread": "nvmf_tgt_poll_group_000", 00:18:14.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:14.077 "listen_address": { 00:18:14.077 "trtype": "TCP", 00:18:14.077 "adrfam": "IPv4", 00:18:14.077 "traddr": "10.0.0.2", 00:18:14.077 "trsvcid": "4420" 00:18:14.077 }, 00:18:14.077 "peer_address": { 00:18:14.077 "trtype": "TCP", 00:18:14.077 "adrfam": "IPv4", 00:18:14.077 "traddr": "10.0.0.1", 00:18:14.077 "trsvcid": "39794" 00:18:14.077 }, 00:18:14.077 "auth": { 00:18:14.077 "state": "completed", 00:18:14.077 "digest": "sha512", 00:18:14.077 "dhgroup": "ffdhe2048" 00:18:14.077 } 00:18:14.077 } 00:18:14.077 ]' 00:18:14.077 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.077 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.077 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.077 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:14.077 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:18:14.077 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.077 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.077 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.338 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:18:14.338 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:18:14.909 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.909 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:14.909 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.909 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.909 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.909 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.909 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:14.909 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:15.170 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:15.170 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.170 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.170 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:15.170 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:15.170 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.170 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
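With the keys in place, each round attaches a controller from the SPDK host to the target and then asserts, via the target's qpair listing, that the connection really authenticated with the expected parameters. Reconstructed from the ffdhe2048/key2 round in progress here (addresses, NQNs, key names, and jq filters are the ones in the trace; the expected-output comments are inferred from the [[ ... ]] checks in the log):

    # attach over TCP, authenticating with key2/ckey2
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # the controller should now be visible on the host...
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
    # -> nvme0

    # ...and the target should report an authenticated queue pair
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | .digest, .dhgroup, .state'
    # -> sha512 / ffdhe2048 / completed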
00:18:15.170 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.170 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.170 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.170 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.170 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.170 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.431 00:18:15.431 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.432 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.432 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.692 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.692 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.692 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.692 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.692 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.692 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.692 { 00:18:15.692 "cntlid": 109, 00:18:15.692 "qid": 0, 00:18:15.692 "state": "enabled", 00:18:15.692 "thread": "nvmf_tgt_poll_group_000", 00:18:15.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:15.692 "listen_address": { 00:18:15.692 "trtype": "TCP", 00:18:15.692 "adrfam": "IPv4", 00:18:15.692 "traddr": "10.0.0.2", 00:18:15.692 "trsvcid": "4420" 00:18:15.692 }, 00:18:15.692 "peer_address": { 00:18:15.693 "trtype": "TCP", 00:18:15.693 "adrfam": "IPv4", 00:18:15.693 "traddr": "10.0.0.1", 00:18:15.693 "trsvcid": "39830" 00:18:15.693 }, 00:18:15.693 "auth": { 00:18:15.693 "state": "completed", 00:18:15.693 "digest": "sha512", 00:18:15.693 "dhgroup": "ffdhe2048" 00:18:15.693 } 00:18:15.693 } 00:18:15.693 ]' 00:18:15.693 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.693 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.693 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.693 07:16:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:15.693 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.693 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.693 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.693 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.953 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:18:15.953 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:18:16.525 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.525 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.525 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.525 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.525 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.525 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.525 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:16.525 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:16.786 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:16.786 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.786 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:16.786 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:16.786 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:16.786 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.786 07:16:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:16.786 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.786 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.786 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.786 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:16.786 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.786 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.049 00:18:17.049 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.049 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.049 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.310 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.310 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.310 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.310 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.310 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.310 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.310 { 00:18:17.310 "cntlid": 111, 00:18:17.310 "qid": 0, 00:18:17.310 "state": "enabled", 00:18:17.310 "thread": "nvmf_tgt_poll_group_000", 00:18:17.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:17.310 "listen_address": { 00:18:17.310 "trtype": "TCP", 00:18:17.310 "adrfam": "IPv4", 00:18:17.310 "traddr": "10.0.0.2", 00:18:17.310 "trsvcid": "4420" 00:18:17.310 }, 00:18:17.310 "peer_address": { 00:18:17.310 "trtype": "TCP", 00:18:17.310 "adrfam": "IPv4", 00:18:17.310 "traddr": "10.0.0.1", 00:18:17.310 "trsvcid": "39848" 00:18:17.310 }, 00:18:17.310 "auth": { 00:18:17.310 "state": "completed", 00:18:17.310 "digest": "sha512", 00:18:17.310 "dhgroup": "ffdhe2048" 00:18:17.310 } 00:18:17.310 } 00:18:17.310 ]' 00:18:17.310 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.310 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.310 
07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.310 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:17.310 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.310 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.310 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.310 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.571 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:18:17.571 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:18:18.150 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.150 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.150 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.150 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.150 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.150 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:18.150 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.150 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:18.150 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:18.411 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:18.411 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.411 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.411 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:18.411 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:18.411 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.411 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.411 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.411 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.411 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.411 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.411 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.411 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.672 00:18:18.672 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.672 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.672 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.933 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.933 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.933 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.933 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.933 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.933 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.933 { 00:18:18.933 "cntlid": 113, 00:18:18.933 "qid": 0, 00:18:18.933 "state": "enabled", 00:18:18.933 "thread": "nvmf_tgt_poll_group_000", 00:18:18.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:18.933 "listen_address": { 00:18:18.933 "trtype": "TCP", 00:18:18.933 "adrfam": "IPv4", 00:18:18.933 "traddr": "10.0.0.2", 00:18:18.933 "trsvcid": "4420" 00:18:18.933 }, 00:18:18.933 "peer_address": { 00:18:18.933 "trtype": "TCP", 00:18:18.933 "adrfam": "IPv4", 00:18:18.933 "traddr": "10.0.0.1", 00:18:18.933 "trsvcid": "39874" 00:18:18.933 }, 00:18:18.933 "auth": { 00:18:18.933 "state": "completed", 00:18:18.933 "digest": "sha512", 00:18:18.933 "dhgroup": "ffdhe3072" 00:18:18.933 } 00:18:18.933 } 00:18:18.933 ]' 00:18:18.933 07:16:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.933 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.933 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.933 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:18.933 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.933 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.933 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.933 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.194 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:18:19.194 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:18:19.766 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.766 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.766 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.766 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.766 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.766 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.766 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:19.766 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:20.026 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:20.026 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.026 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:20.026 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:20.026 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:20.026 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.026 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.026 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.026 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.026 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.026 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.026 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.026 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.288 00:18:20.288 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.288 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.288 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.549 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.549 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.549 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.549 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.549 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.549 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.549 { 00:18:20.549 "cntlid": 115, 00:18:20.549 "qid": 0, 00:18:20.549 "state": "enabled", 00:18:20.549 "thread": "nvmf_tgt_poll_group_000", 00:18:20.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:20.549 "listen_address": { 00:18:20.549 "trtype": "TCP", 00:18:20.549 "adrfam": "IPv4", 00:18:20.549 "traddr": "10.0.0.2", 00:18:20.549 "trsvcid": "4420" 00:18:20.549 }, 00:18:20.549 "peer_address": { 00:18:20.549 "trtype": "TCP", 00:18:20.549 "adrfam": "IPv4", 
00:18:20.549 "traddr": "10.0.0.1", 00:18:20.549 "trsvcid": "39916" 00:18:20.549 }, 00:18:20.549 "auth": { 00:18:20.549 "state": "completed", 00:18:20.549 "digest": "sha512", 00:18:20.549 "dhgroup": "ffdhe3072" 00:18:20.549 } 00:18:20.549 } 00:18:20.549 ]' 00:18:20.549 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.549 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.549 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.549 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:20.549 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.549 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.549 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.549 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.810 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:18:20.810 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:18:21.382 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.382 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.382 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.382 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.382 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.382 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.382 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:21.382 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:21.644 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
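After the SPDK-host controller is detached, each round repeats the same handshake with the kernel initiator: nvme-cli is handed the DHHC-1 secrets directly, the connect must succeed, and the subsystem is disconnected and the host de-registered before the next key is tried. A sketch of that leg, with the secret strings replaced by shell variables for readability (the actual DHHC-1:xx:...: values are printed in full in the trace; the variable names are mine, everything else mirrors the log):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    HOST_KEY='DHHC-1:01:...'    # placeholder; host secret as printed in the trace
    CTRL_KEY='DHHC-1:02:...'    # placeholder; controller secret (omitted in key3 rounds)

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
        --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"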
00:18:21.644 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.644 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:21.644 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:21.644 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:21.644 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.644 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.644 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.644 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.645 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.645 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.645 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.645 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.906 00:18:21.906 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.906 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.906 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.167 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.167 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.167 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.167 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.167 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.167 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.167 { 00:18:22.167 "cntlid": 117, 00:18:22.167 "qid": 0, 00:18:22.167 "state": "enabled", 00:18:22.167 "thread": "nvmf_tgt_poll_group_000", 00:18:22.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:22.167 "listen_address": { 00:18:22.167 "trtype": "TCP", 
00:18:22.167 "adrfam": "IPv4", 00:18:22.167 "traddr": "10.0.0.2", 00:18:22.167 "trsvcid": "4420" 00:18:22.167 }, 00:18:22.167 "peer_address": { 00:18:22.167 "trtype": "TCP", 00:18:22.167 "adrfam": "IPv4", 00:18:22.167 "traddr": "10.0.0.1", 00:18:22.167 "trsvcid": "39944" 00:18:22.167 }, 00:18:22.167 "auth": { 00:18:22.167 "state": "completed", 00:18:22.167 "digest": "sha512", 00:18:22.167 "dhgroup": "ffdhe3072" 00:18:22.167 } 00:18:22.167 } 00:18:22.167 ]' 00:18:22.167 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.167 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.167 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.167 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:22.167 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.167 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.167 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.167 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.428 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:18:22.428 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:18:22.999 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.999 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.999 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.999 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.999 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.999 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.999 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:22.999 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:23.260 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:23.260 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.260 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:23.260 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:23.260 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:23.260 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.260 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:23.260 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.260 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.260 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.260 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:23.260 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.260 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.521 00:18:23.521 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.521 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.521 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.782 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.782 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.782 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.782 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.782 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.782 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.782 { 00:18:23.782 "cntlid": 119, 00:18:23.782 "qid": 0, 00:18:23.782 "state": "enabled", 00:18:23.782 "thread": "nvmf_tgt_poll_group_000", 00:18:23.782 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:23.782 "listen_address": { 00:18:23.782 "trtype": "TCP", 00:18:23.782 "adrfam": "IPv4", 00:18:23.782 "traddr": "10.0.0.2", 00:18:23.782 "trsvcid": "4420" 00:18:23.782 }, 00:18:23.782 "peer_address": { 00:18:23.782 "trtype": "TCP", 00:18:23.782 "adrfam": "IPv4", 00:18:23.782 "traddr": "10.0.0.1", 00:18:23.782 "trsvcid": "36188" 00:18:23.782 }, 00:18:23.782 "auth": { 00:18:23.782 "state": "completed", 00:18:23.782 "digest": "sha512", 00:18:23.782 "dhgroup": "ffdhe3072" 00:18:23.782 } 00:18:23.782 } 00:18:23.782 ]' 00:18:23.782 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.782 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.782 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.782 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:23.782 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.782 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.782 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.782 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.043 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:18:24.044 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:18:24.616 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.616 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.616 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.616 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.616 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.616 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:24.616 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.616 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:24.616 07:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:24.877 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:24.877 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.877 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.877 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:24.877 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:24.877 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.877 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.877 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.877 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.877 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.877 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.877 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.877 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.139 00:18:25.140 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.140 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.140 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.401 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.401 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.401 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.401 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.401 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.401 07:16:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.401 { 00:18:25.401 "cntlid": 121, 00:18:25.401 "qid": 0, 00:18:25.401 "state": "enabled", 00:18:25.401 "thread": "nvmf_tgt_poll_group_000", 00:18:25.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:25.401 "listen_address": { 00:18:25.401 "trtype": "TCP", 00:18:25.401 "adrfam": "IPv4", 00:18:25.401 "traddr": "10.0.0.2", 00:18:25.401 "trsvcid": "4420" 00:18:25.401 }, 00:18:25.401 "peer_address": { 00:18:25.401 "trtype": "TCP", 00:18:25.401 "adrfam": "IPv4", 00:18:25.401 "traddr": "10.0.0.1", 00:18:25.401 "trsvcid": "36218" 00:18:25.401 }, 00:18:25.401 "auth": { 00:18:25.401 "state": "completed", 00:18:25.401 "digest": "sha512", 00:18:25.401 "dhgroup": "ffdhe4096" 00:18:25.401 } 00:18:25.401 } 00:18:25.401 ]' 00:18:25.401 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.401 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.401 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.401 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:25.401 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.401 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.401 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.401 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.661 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:18:25.661 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:18:26.231 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.231 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.231 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.231 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.231 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
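Each pass of the loop traced above exercises one (digest, dhgroup, key) combination. Distilled from the trace, the per-iteration RPC sequence is roughly the sketch below; it is a summary of what the log shows, not the test script itself, and $SUBNQN/$HOSTNQN stand in for the nqn.2024-03.io.spdk:cnode0 and nqn.2014-08.org.nvmexpress:uuid:... values printed in the log (rpc.py is the full workspace path in the actual trace):

  # Host-side bdev_nvme layer (hostrpc, -s /var/tmp/host.sock): allow only
  # the digest/dhgroup pair under test.
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # Target side (rpc_cmd, default socket): register the host with the
  # DH-HMAC-CHAP keys under test. For key3 the trace omits the ctrlr key,
  # since that key slot has no corresponding ckey.
  rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Authenticate by attaching a controller over TCP, then verify that the
  # resulting qpair completed DH-HMAC-CHAP with the expected parameters.
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  rpc.py nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'
  # expect: "completed" (and .digest/.dhgroup matching the pair set above)

  # Tear down before the next combination.
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The kernel-host path seen in the trace (target/auth.sh@36) is checked the same way, but with nvme-cli instead of the SPDK host: nvme connect ... --dhchap-secret DHHC-1:xx:... followed by nvme disconnect and nvmf_subsystem_remove_host, before the loop advances to the next dhgroup.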
00:18:26.231 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.231 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:26.231 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:26.493 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:26.493 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.493 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:26.493 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:26.493 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:26.493 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.493 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.493 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.493 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.493 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.493 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.493 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.493 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.754 00:18:26.754 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.754 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.754 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.016 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.016 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.016 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.016 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.016 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.016 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.016 { 00:18:27.016 "cntlid": 123, 00:18:27.016 "qid": 0, 00:18:27.016 "state": "enabled", 00:18:27.016 "thread": "nvmf_tgt_poll_group_000", 00:18:27.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:27.016 "listen_address": { 00:18:27.016 "trtype": "TCP", 00:18:27.016 "adrfam": "IPv4", 00:18:27.016 "traddr": "10.0.0.2", 00:18:27.016 "trsvcid": "4420" 00:18:27.016 }, 00:18:27.016 "peer_address": { 00:18:27.016 "trtype": "TCP", 00:18:27.016 "adrfam": "IPv4", 00:18:27.016 "traddr": "10.0.0.1", 00:18:27.016 "trsvcid": "36242" 00:18:27.016 }, 00:18:27.016 "auth": { 00:18:27.016 "state": "completed", 00:18:27.016 "digest": "sha512", 00:18:27.016 "dhgroup": "ffdhe4096" 00:18:27.016 } 00:18:27.016 } 00:18:27.016 ]' 00:18:27.016 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.016 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.016 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.016 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:27.016 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.016 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.016 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.016 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.276 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:18:27.276 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:18:27.846 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.846 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.846 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.846 07:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.846 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.846 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.846 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:27.846 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:28.108 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:28.108 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.108 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:28.108 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:28.108 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:28.108 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.108 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.108 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.108 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.108 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.108 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.108 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.108 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.370 00:18:28.370 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.370 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.370 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.631 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.631 07:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.631 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.631 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.631 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.631 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.631 { 00:18:28.631 "cntlid": 125, 00:18:28.631 "qid": 0, 00:18:28.631 "state": "enabled", 00:18:28.631 "thread": "nvmf_tgt_poll_group_000", 00:18:28.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:28.631 "listen_address": { 00:18:28.631 "trtype": "TCP", 00:18:28.631 "adrfam": "IPv4", 00:18:28.631 "traddr": "10.0.0.2", 00:18:28.631 "trsvcid": "4420" 00:18:28.631 }, 00:18:28.631 "peer_address": { 00:18:28.631 "trtype": "TCP", 00:18:28.631 "adrfam": "IPv4", 00:18:28.631 "traddr": "10.0.0.1", 00:18:28.631 "trsvcid": "36270" 00:18:28.631 }, 00:18:28.631 "auth": { 00:18:28.631 "state": "completed", 00:18:28.631 "digest": "sha512", 00:18:28.631 "dhgroup": "ffdhe4096" 00:18:28.631 } 00:18:28.631 } 00:18:28.631 ]' 00:18:28.631 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.631 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.631 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.631 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:28.631 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.631 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.631 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.631 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.891 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:18:28.891 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:18:29.462 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.462 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.462 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.462 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.462 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.462 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.462 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:29.462 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:29.722 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:29.722 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.722 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:29.722 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:29.722 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:29.722 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.722 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:29.722 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.722 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.722 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.722 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:29.722 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:29.722 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:29.983 00:18:29.983 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.983 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.983 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.245 07:16:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.245 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.245 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.245 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.245 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.245 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.245 { 00:18:30.245 "cntlid": 127, 00:18:30.245 "qid": 0, 00:18:30.245 "state": "enabled", 00:18:30.245 "thread": "nvmf_tgt_poll_group_000", 00:18:30.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:30.245 "listen_address": { 00:18:30.245 "trtype": "TCP", 00:18:30.245 "adrfam": "IPv4", 00:18:30.245 "traddr": "10.0.0.2", 00:18:30.245 "trsvcid": "4420" 00:18:30.245 }, 00:18:30.245 "peer_address": { 00:18:30.245 "trtype": "TCP", 00:18:30.245 "adrfam": "IPv4", 00:18:30.245 "traddr": "10.0.0.1", 00:18:30.245 "trsvcid": "36294" 00:18:30.245 }, 00:18:30.245 "auth": { 00:18:30.245 "state": "completed", 00:18:30.245 "digest": "sha512", 00:18:30.245 "dhgroup": "ffdhe4096" 00:18:30.245 } 00:18:30.245 } 00:18:30.245 ]' 00:18:30.245 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.245 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.245 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.245 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:30.245 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.245 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.245 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.245 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.506 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:18:30.506 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:18:31.077 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.077 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.077 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.077 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.077 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.077 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:31.077 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.077 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:31.077 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:31.337 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:31.337 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.337 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:31.337 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:31.338 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:31.338 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.338 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.338 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.338 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.338 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.338 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.338 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.338 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.598 00:18:31.598 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.598 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.598 
07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.860 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.860 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.860 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.860 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.860 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.860 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.860 { 00:18:31.860 "cntlid": 129, 00:18:31.860 "qid": 0, 00:18:31.860 "state": "enabled", 00:18:31.860 "thread": "nvmf_tgt_poll_group_000", 00:18:31.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:31.860 "listen_address": { 00:18:31.860 "trtype": "TCP", 00:18:31.860 "adrfam": "IPv4", 00:18:31.860 "traddr": "10.0.0.2", 00:18:31.860 "trsvcid": "4420" 00:18:31.860 }, 00:18:31.860 "peer_address": { 00:18:31.860 "trtype": "TCP", 00:18:31.860 "adrfam": "IPv4", 00:18:31.860 "traddr": "10.0.0.1", 00:18:31.860 "trsvcid": "36332" 00:18:31.860 }, 00:18:31.860 "auth": { 00:18:31.860 "state": "completed", 00:18:31.860 "digest": "sha512", 00:18:31.860 "dhgroup": "ffdhe6144" 00:18:31.860 } 00:18:31.860 } 00:18:31.860 ]' 00:18:31.860 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.860 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.860 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.860 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:31.860 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.860 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.860 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.860 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.122 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:18:32.122 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret 
DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:18:32.693 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.693 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.693 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.693 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.693 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.693 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.693 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:32.693 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:32.954 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:32.954 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.954 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:32.954 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:32.954 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:32.954 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.954 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.954 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.954 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.954 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.954 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.954 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.954 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.271 00:18:33.271 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.271 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.271 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.553 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.553 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.553 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.553 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.553 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.553 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.553 { 00:18:33.553 "cntlid": 131, 00:18:33.553 "qid": 0, 00:18:33.553 "state": "enabled", 00:18:33.553 "thread": "nvmf_tgt_poll_group_000", 00:18:33.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:33.553 "listen_address": { 00:18:33.553 "trtype": "TCP", 00:18:33.553 "adrfam": "IPv4", 00:18:33.553 "traddr": "10.0.0.2", 00:18:33.553 "trsvcid": "4420" 00:18:33.553 }, 00:18:33.553 "peer_address": { 00:18:33.553 "trtype": "TCP", 00:18:33.553 "adrfam": "IPv4", 00:18:33.553 "traddr": "10.0.0.1", 00:18:33.553 "trsvcid": "39334" 00:18:33.553 }, 00:18:33.553 "auth": { 00:18:33.553 "state": "completed", 00:18:33.553 "digest": "sha512", 00:18:33.553 "dhgroup": "ffdhe6144" 00:18:33.553 } 00:18:33.553 } 00:18:33.553 ]' 00:18:33.553 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.553 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.553 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.553 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:33.553 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.553 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.553 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.553 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.891 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:18:33.891 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:18:34.462 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.462 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.462 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.462 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.462 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.462 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.462 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:34.462 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:34.723 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:34.723 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.723 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:34.723 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:34.723 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:34.723 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.723 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.723 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.723 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.723 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.723 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.724 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.724 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.987 00:18:34.987 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.987 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.987 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.251 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.251 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.251 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.251 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.251 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.251 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.251 { 00:18:35.251 "cntlid": 133, 00:18:35.251 "qid": 0, 00:18:35.251 "state": "enabled", 00:18:35.251 "thread": "nvmf_tgt_poll_group_000", 00:18:35.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:35.251 "listen_address": { 00:18:35.251 "trtype": "TCP", 00:18:35.251 "adrfam": "IPv4", 00:18:35.251 "traddr": "10.0.0.2", 00:18:35.251 "trsvcid": "4420" 00:18:35.251 }, 00:18:35.251 "peer_address": { 00:18:35.251 "trtype": "TCP", 00:18:35.251 "adrfam": "IPv4", 00:18:35.251 "traddr": "10.0.0.1", 00:18:35.251 "trsvcid": "39372" 00:18:35.251 }, 00:18:35.251 "auth": { 00:18:35.251 "state": "completed", 00:18:35.251 "digest": "sha512", 00:18:35.251 "dhgroup": "ffdhe6144" 00:18:35.251 } 00:18:35.251 } 00:18:35.251 ]' 00:18:35.251 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.251 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.251 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.251 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:35.251 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.251 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.251 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.251 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.511 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret 
DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:18:35.511 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:18:36.083 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.083 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.083 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.083 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.083 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.083 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.083 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:36.084 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:36.344 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:36.344 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.344 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:36.344 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:36.344 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:36.344 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.344 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:36.344 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.344 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.344 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.344 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:36.344 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:36.344 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.604 00:18:36.604 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.604 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.604 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.866 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.866 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.866 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.866 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.866 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.867 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.867 { 00:18:36.867 "cntlid": 135, 00:18:36.867 "qid": 0, 00:18:36.867 "state": "enabled", 00:18:36.867 "thread": "nvmf_tgt_poll_group_000", 00:18:36.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:36.867 "listen_address": { 00:18:36.867 "trtype": "TCP", 00:18:36.867 "adrfam": "IPv4", 00:18:36.867 "traddr": "10.0.0.2", 00:18:36.867 "trsvcid": "4420" 00:18:36.867 }, 00:18:36.867 "peer_address": { 00:18:36.867 "trtype": "TCP", 00:18:36.867 "adrfam": "IPv4", 00:18:36.867 "traddr": "10.0.0.1", 00:18:36.867 "trsvcid": "39418" 00:18:36.867 }, 00:18:36.867 "auth": { 00:18:36.867 "state": "completed", 00:18:36.867 "digest": "sha512", 00:18:36.867 "dhgroup": "ffdhe6144" 00:18:36.867 } 00:18:36.867 } 00:18:36.867 ]' 00:18:36.867 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.867 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.867 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.127 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:37.127 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.127 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.127 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.127 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.127 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:18:37.127 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:18:38.069 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.069 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:38.069 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.069 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.069 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.069 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.069 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.069 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:38.069 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:38.069 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:38.069 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.069 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:38.069 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:38.069 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:38.069 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.069 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.069 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.069 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.069 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.069 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.069 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.069 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.641 00:18:38.641 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.641 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.641 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.641 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.641 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.641 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.641 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.641 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.641 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.641 { 00:18:38.641 "cntlid": 137, 00:18:38.641 "qid": 0, 00:18:38.641 "state": "enabled", 00:18:38.641 "thread": "nvmf_tgt_poll_group_000", 00:18:38.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:38.641 "listen_address": { 00:18:38.641 "trtype": "TCP", 00:18:38.641 "adrfam": "IPv4", 00:18:38.641 "traddr": "10.0.0.2", 00:18:38.641 "trsvcid": "4420" 00:18:38.641 }, 00:18:38.641 "peer_address": { 00:18:38.641 "trtype": "TCP", 00:18:38.641 "adrfam": "IPv4", 00:18:38.641 "traddr": "10.0.0.1", 00:18:38.641 "trsvcid": "39430" 00:18:38.641 }, 00:18:38.641 "auth": { 00:18:38.641 "state": "completed", 00:18:38.641 "digest": "sha512", 00:18:38.641 "dhgroup": "ffdhe8192" 00:18:38.641 } 00:18:38.641 } 00:18:38.641 ]' 00:18:38.641 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.902 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.902 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.902 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:38.902 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.902 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.902 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.902 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.902 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:18:38.902 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:18:39.845 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.845 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.845 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.845 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.845 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.845 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.845 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:39.845 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:39.845 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:39.845 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.845 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:39.845 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:39.845 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:39.845 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.845 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.845 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.845 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.845 07:17:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.845 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.845 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.845 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.417 00:18:40.418 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.418 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.418 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.418 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.418 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.418 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.418 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.678 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.678 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.678 { 00:18:40.678 "cntlid": 139, 00:18:40.678 "qid": 0, 00:18:40.678 "state": "enabled", 00:18:40.678 "thread": "nvmf_tgt_poll_group_000", 00:18:40.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:40.678 "listen_address": { 00:18:40.678 "trtype": "TCP", 00:18:40.678 "adrfam": "IPv4", 00:18:40.678 "traddr": "10.0.0.2", 00:18:40.678 "trsvcid": "4420" 00:18:40.678 }, 00:18:40.678 "peer_address": { 00:18:40.678 "trtype": "TCP", 00:18:40.678 "adrfam": "IPv4", 00:18:40.678 "traddr": "10.0.0.1", 00:18:40.678 "trsvcid": "39460" 00:18:40.678 }, 00:18:40.678 "auth": { 00:18:40.678 "state": "completed", 00:18:40.678 "digest": "sha512", 00:18:40.679 "dhgroup": "ffdhe8192" 00:18:40.679 } 00:18:40.679 } 00:18:40.679 ]' 00:18:40.679 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.679 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.679 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.679 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:40.679 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.679 07:17:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.679 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.679 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.938 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:18:40.939 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: --dhchap-ctrl-secret DHHC-1:02:ODVjODZlYzc3OWM0MzY3NWU0MWRiNzFkYmRlNjFlMzZmZGM0YTY3ZTU2OGFiMTE1HFPB6A==: 00:18:41.509 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.509 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:41.509 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.509 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.509 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.509 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.509 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:41.509 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:41.769 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:41.770 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.770 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:41.770 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:41.770 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:41.770 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.770 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.770 07:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.770 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.770 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.770 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.770 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.770 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.342 00:18:42.342 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.342 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.342 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.342 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.342 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.342 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.342 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.342 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.342 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:42.342 { 00:18:42.342 "cntlid": 141, 00:18:42.342 "qid": 0, 00:18:42.342 "state": "enabled", 00:18:42.342 "thread": "nvmf_tgt_poll_group_000", 00:18:42.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:42.342 "listen_address": { 00:18:42.342 "trtype": "TCP", 00:18:42.342 "adrfam": "IPv4", 00:18:42.342 "traddr": "10.0.0.2", 00:18:42.342 "trsvcid": "4420" 00:18:42.342 }, 00:18:42.342 "peer_address": { 00:18:42.342 "trtype": "TCP", 00:18:42.342 "adrfam": "IPv4", 00:18:42.342 "traddr": "10.0.0.1", 00:18:42.342 "trsvcid": "39484" 00:18:42.342 }, 00:18:42.342 "auth": { 00:18:42.342 "state": "completed", 00:18:42.342 "digest": "sha512", 00:18:42.342 "dhgroup": "ffdhe8192" 00:18:42.342 } 00:18:42.342 } 00:18:42.342 ]' 00:18:42.342 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:42.342 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.342 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.603 07:17:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:42.603 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.603 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.603 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.603 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.603 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:18:42.603 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:01:MjFlOTMwZWZkY2YyMDRlMGZlZWMzOGQ3YmFkYjFhOTg2d2yc: 00:18:43.545 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.545 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.545 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.545 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.545 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.545 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.545 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:43.545 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:43.545 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:43.545 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.545 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:43.545 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:43.545 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:43.545 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.545 07:17:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:43.545 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.545 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.545 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.545 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:43.545 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:43.545 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:44.117 00:18:44.117 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.117 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.117 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.117 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.117 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.117 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.117 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.378 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.378 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.378 { 00:18:44.378 "cntlid": 143, 00:18:44.378 "qid": 0, 00:18:44.378 "state": "enabled", 00:18:44.378 "thread": "nvmf_tgt_poll_group_000", 00:18:44.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:44.378 "listen_address": { 00:18:44.378 "trtype": "TCP", 00:18:44.378 "adrfam": "IPv4", 00:18:44.378 "traddr": "10.0.0.2", 00:18:44.378 "trsvcid": "4420" 00:18:44.378 }, 00:18:44.378 "peer_address": { 00:18:44.378 "trtype": "TCP", 00:18:44.378 "adrfam": "IPv4", 00:18:44.378 "traddr": "10.0.0.1", 00:18:44.378 "trsvcid": "47552" 00:18:44.378 }, 00:18:44.378 "auth": { 00:18:44.378 "state": "completed", 00:18:44.378 "digest": "sha512", 00:18:44.378 "dhgroup": "ffdhe8192" 00:18:44.378 } 00:18:44.378 } 00:18:44.378 ]' 00:18:44.378 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.378 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:44.378 
07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.378 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:44.378 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.378 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.378 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.378 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.639 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:18:44.639 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:18:45.210 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.211 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.211 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.211 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.211 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.211 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:45.211 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:45.211 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:45.211 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:45.211 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:45.211 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:45.472 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:45.472 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.472 07:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:45.472 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:45.472 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:45.472 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.472 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.472 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.472 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.472 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.472 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.472 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.472 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.733 00:18:45.994 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.994 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.994 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.994 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.994 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.994 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.994 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.994 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.994 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.994 { 00:18:45.994 "cntlid": 145, 00:18:45.994 "qid": 0, 00:18:45.994 "state": "enabled", 00:18:45.994 "thread": "nvmf_tgt_poll_group_000", 00:18:45.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:45.994 "listen_address": { 00:18:45.994 "trtype": "TCP", 00:18:45.994 "adrfam": "IPv4", 00:18:45.994 "traddr": "10.0.0.2", 00:18:45.994 "trsvcid": "4420" 00:18:45.994 }, 00:18:45.994 "peer_address": { 00:18:45.994 
"trtype": "TCP", 00:18:45.994 "adrfam": "IPv4", 00:18:45.994 "traddr": "10.0.0.1", 00:18:45.994 "trsvcid": "47564" 00:18:45.994 }, 00:18:45.994 "auth": { 00:18:45.994 "state": "completed", 00:18:45.994 "digest": "sha512", 00:18:45.994 "dhgroup": "ffdhe8192" 00:18:45.994 } 00:18:45.994 } 00:18:45.994 ]' 00:18:45.994 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.994 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.994 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.256 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:46.256 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.256 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.256 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.256 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.256 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:18:46.256 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2IyZjE4MDc4NWVkNjEwMGQzZWJlZTU1OWJhYjY4NThhNTE3MzcxYTFmOWExMGFl1Zkf4g==: --dhchap-ctrl-secret DHHC-1:03:Y2I1MDYxMTMxMThjZWM1NGEwYTZlOGVkNjk5NWYyMDBhYTY3ZjEzYzY1ZjMxMTE1YjgzZDk5NTRjNjRhNjFiNu3oJq4=: 00:18:47.196 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.197 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.197 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.197 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.197 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.197 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:47.197 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.197 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.197 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.197 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:47.197 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:47.197 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:47.197 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:47.197 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.197 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:47.197 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.197 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:47.197 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:47.197 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:47.459 request: 00:18:47.459 { 00:18:47.459 "name": "nvme0", 00:18:47.459 "trtype": "tcp", 00:18:47.459 "traddr": "10.0.0.2", 00:18:47.459 "adrfam": "ipv4", 00:18:47.459 "trsvcid": "4420", 00:18:47.459 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:47.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:47.459 "prchk_reftag": false, 00:18:47.459 "prchk_guard": false, 00:18:47.459 "hdgst": false, 00:18:47.459 "ddgst": false, 00:18:47.459 "dhchap_key": "key2", 00:18:47.459 "allow_unrecognized_csi": false, 00:18:47.459 "method": "bdev_nvme_attach_controller", 00:18:47.459 "req_id": 1 00:18:47.459 } 00:18:47.459 Got JSON-RPC error response 00:18:47.459 response: 00:18:47.459 { 00:18:47.459 "code": -5, 00:18:47.459 "message": "Input/output error" 00:18:47.459 } 00:18:47.459 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:47.459 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:47.459 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:47.459 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:47.459 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.459 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.459 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.459 07:17:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.459 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.459 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.459 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.459 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.459 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:47.459 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:47.459 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:47.459 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:47.459 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.459 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:47.459 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.459 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:47.459 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:47.459 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:48.030 request: 00:18:48.030 { 00:18:48.030 "name": "nvme0", 00:18:48.030 "trtype": "tcp", 00:18:48.030 "traddr": "10.0.0.2", 00:18:48.030 "adrfam": "ipv4", 00:18:48.030 "trsvcid": "4420", 00:18:48.030 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:48.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:48.030 "prchk_reftag": false, 00:18:48.030 "prchk_guard": false, 00:18:48.030 "hdgst": false, 00:18:48.030 "ddgst": false, 00:18:48.030 "dhchap_key": "key1", 00:18:48.030 "dhchap_ctrlr_key": "ckey2", 00:18:48.030 "allow_unrecognized_csi": false, 00:18:48.030 "method": "bdev_nvme_attach_controller", 00:18:48.030 "req_id": 1 00:18:48.030 } 00:18:48.030 Got JSON-RPC error response 00:18:48.030 response: 00:18:48.030 { 00:18:48.030 "code": -5, 00:18:48.030 "message": "Input/output error" 00:18:48.030 } 00:18:48.030 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:48.030 07:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:48.030 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:48.030 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:48.030 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:48.030 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.030 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.030 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.030 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:48.030 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.030 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.030 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.030 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.030 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:48.030 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.030 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:48.030 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.030 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:48.030 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.030 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.030 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.030 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.291 request: 00:18:48.291 { 00:18:48.291 "name": "nvme0", 00:18:48.291 "trtype": "tcp", 00:18:48.291 "traddr": "10.0.0.2", 00:18:48.291 "adrfam": "ipv4", 00:18:48.291 "trsvcid": "4420", 00:18:48.291 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:48.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:48.291 "prchk_reftag": false, 00:18:48.291 "prchk_guard": false, 00:18:48.291 "hdgst": false, 00:18:48.291 "ddgst": false, 00:18:48.291 "dhchap_key": "key1", 00:18:48.291 "dhchap_ctrlr_key": "ckey1", 00:18:48.291 "allow_unrecognized_csi": false, 00:18:48.291 "method": "bdev_nvme_attach_controller", 00:18:48.291 "req_id": 1 00:18:48.291 } 00:18:48.291 Got JSON-RPC error response 00:18:48.291 response: 00:18:48.291 { 00:18:48.291 "code": -5, 00:18:48.291 "message": "Input/output error" 00:18:48.291 } 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3489910 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3489910 ']' 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3489910 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3489910 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3489910' 00:18:48.552 killing process with pid 3489910 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3489910 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3489910 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3516062 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3516062 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3516062 ']' 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:48.552 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.494 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:49.494 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:49.494 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:49.494 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:49.494 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.494 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.494 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:49.494 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3516062 00:18:49.494 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3516062 ']' 00:18:49.494 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.494 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:49.494 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:49.494 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:49.494 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.755 null0 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cAU 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.QYF ]] 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QYF 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.JhY 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.q2T ]] 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.q2T 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:49.755 07:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.fC4 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.755 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.755 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.755 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.ZFh ]] 00:18:49.755 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZFh 00:18:49.755 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.755 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.755 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.755 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:49.755 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.r0j 00:18:49.755 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.755 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.016 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.016 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:50.016 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:50.016 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.016 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:50.016 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:50.016 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:50.016 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.016 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:50.016 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.016 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.016 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.016 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:50.016 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:18:50.016 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:50.587 nvme0n1 00:18:50.587 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.587 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.587 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.848 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.848 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.848 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.848 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.848 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.848 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.848 { 00:18:50.848 "cntlid": 1, 00:18:50.848 "qid": 0, 00:18:50.848 "state": "enabled", 00:18:50.848 "thread": "nvmf_tgt_poll_group_000", 00:18:50.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:50.848 "listen_address": { 00:18:50.848 "trtype": "TCP", 00:18:50.848 "adrfam": "IPv4", 00:18:50.848 "traddr": "10.0.0.2", 00:18:50.848 "trsvcid": "4420" 00:18:50.848 }, 00:18:50.848 "peer_address": { 00:18:50.848 "trtype": "TCP", 00:18:50.848 "adrfam": "IPv4", 00:18:50.848 "traddr": "10.0.0.1", 00:18:50.848 "trsvcid": "47618" 00:18:50.848 }, 00:18:50.848 "auth": { 00:18:50.848 "state": "completed", 00:18:50.848 "digest": "sha512", 00:18:50.848 "dhgroup": "ffdhe8192" 00:18:50.848 } 00:18:50.848 } 00:18:50.848 ]' 00:18:50.848 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.848 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:50.848 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.848 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:50.848 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.848 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.848 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.848 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.109 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:18:51.109 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:18:51.680 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.941 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:51.941 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.941 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.941 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.941 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:51.941 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.941 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.941 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.941 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:51.941 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:51.941 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:51.941 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:51.941 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:51.941 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:51.941 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:51.941 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:51.941 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:51.941 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:51.941 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:51.941 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:52.201 request: 00:18:52.201 { 00:18:52.201 "name": "nvme0", 00:18:52.201 "trtype": "tcp", 00:18:52.201 "traddr": "10.0.0.2", 00:18:52.201 "adrfam": "ipv4", 00:18:52.201 "trsvcid": "4420", 00:18:52.201 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:52.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:52.201 "prchk_reftag": false, 00:18:52.201 "prchk_guard": false, 00:18:52.201 "hdgst": false, 00:18:52.201 "ddgst": false, 00:18:52.201 "dhchap_key": "key3", 00:18:52.201 "allow_unrecognized_csi": false, 00:18:52.201 "method": "bdev_nvme_attach_controller", 00:18:52.201 "req_id": 1 00:18:52.201 } 00:18:52.201 Got JSON-RPC error response 00:18:52.201 response: 00:18:52.201 { 00:18:52.201 "code": -5, 00:18:52.201 "message": "Input/output error" 00:18:52.201 } 00:18:52.201 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:52.202 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:52.202 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:52.202 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:52.202 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:52.202 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:52.202 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:52.202 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:52.461 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:52.461 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:52.461 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:52.461 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:52.461 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:52.461 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:52.461 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:52.461 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:52.461 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:52.461 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:52.461 request: 00:18:52.461 { 00:18:52.461 "name": "nvme0", 00:18:52.461 "trtype": "tcp", 00:18:52.462 "traddr": "10.0.0.2", 00:18:52.462 "adrfam": "ipv4", 00:18:52.462 "trsvcid": "4420", 00:18:52.462 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:52.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:52.462 "prchk_reftag": false, 00:18:52.462 "prchk_guard": false, 00:18:52.462 "hdgst": false, 00:18:52.462 "ddgst": false, 00:18:52.462 "dhchap_key": "key3", 00:18:52.462 "allow_unrecognized_csi": false, 00:18:52.462 "method": "bdev_nvme_attach_controller", 00:18:52.462 "req_id": 1 00:18:52.462 } 00:18:52.462 Got JSON-RPC error response 00:18:52.462 response: 00:18:52.462 { 00:18:52.462 "code": -5, 00:18:52.462 "message": "Input/output error" 00:18:52.462 } 00:18:52.462 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:52.462 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:52.462 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:52.462 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:52.462 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:52.462 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:52.462 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:52.462 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:52.462 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:52.462 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:52.722 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:52.722 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.722 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.722 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.722 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:52.722 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.722 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.722 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.722 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:52.722 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:52.722 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:52.722 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:52.722 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:52.722 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:52.722 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:52.722 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:52.722 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:52.722 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:52.983 request: 00:18:52.983 { 00:18:52.983 "name": "nvme0", 00:18:52.983 "trtype": "tcp", 00:18:52.983 "traddr": "10.0.0.2", 00:18:52.983 "adrfam": "ipv4", 00:18:52.983 "trsvcid": "4420", 00:18:52.983 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:52.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:52.983 "prchk_reftag": false, 00:18:52.983 "prchk_guard": false, 00:18:52.983 "hdgst": false, 00:18:52.983 "ddgst": false, 00:18:52.983 "dhchap_key": "key0", 00:18:52.983 "dhchap_ctrlr_key": "key1", 00:18:52.983 "allow_unrecognized_csi": false, 00:18:52.983 "method": "bdev_nvme_attach_controller", 00:18:52.983 "req_id": 1 00:18:52.983 } 00:18:52.983 Got JSON-RPC error response 00:18:52.983 response: 00:18:52.983 { 00:18:52.983 "code": -5, 00:18:52.983 "message": "Input/output error" 00:18:52.983 } 00:18:52.983 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:52.983 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:52.983 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:52.983 07:17:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:52.983 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:52.983 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:52.983 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:53.243 nvme0n1 00:18:53.243 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:53.243 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:53.243 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.504 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.504 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.504 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.765 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:53.765 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.765 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.765 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.765 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:53.765 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:53.765 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:54.337 nvme0n1 00:18:54.337 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:54.337 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:54.337 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.597 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.597 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:54.597 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.597 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.597 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.597 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:54.597 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.597 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:54.858 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.858 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:18:54.858 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: --dhchap-ctrl-secret DHHC-1:03:M2IyZmJjNzU3ZTE3MjMzYzdkMTg4ZTAyYmE1NTRhYWFjOTY1NTEwZGM2OTg1ODdmY2E0NTIwY2JiNDE2ZDUzYi7t4/M=: 00:18:55.428 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:55.428 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:55.428 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:55.428 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:55.428 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:55.428 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:55.428 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:55.428 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.428 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.689 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:55.690 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:55.690 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:55.690 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:55.690 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:55.690 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:55.690 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:55.690 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:55.690 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:55.690 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:56.263 request: 00:18:56.263 { 00:18:56.263 "name": "nvme0", 00:18:56.263 "trtype": "tcp", 00:18:56.263 "traddr": "10.0.0.2", 00:18:56.263 "adrfam": "ipv4", 00:18:56.263 "trsvcid": "4420", 00:18:56.263 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:56.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:56.263 "prchk_reftag": false, 00:18:56.263 "prchk_guard": false, 00:18:56.263 "hdgst": false, 00:18:56.263 "ddgst": false, 00:18:56.263 "dhchap_key": "key1", 00:18:56.263 "allow_unrecognized_csi": false, 00:18:56.263 "method": "bdev_nvme_attach_controller", 00:18:56.263 "req_id": 1 00:18:56.263 } 00:18:56.263 Got JSON-RPC error response 00:18:56.263 response: 00:18:56.263 { 00:18:56.263 "code": -5, 00:18:56.263 "message": "Input/output error" 00:18:56.263 } 00:18:56.263 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:56.263 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:56.263 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:56.263 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:56.263 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:56.263 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:56.263 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:56.834 nvme0n1 00:18:56.834 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:56.834 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:56.834 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.113 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.113 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.113 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.113 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:57.113 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.113 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.114 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.114 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:57.114 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:57.114 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:57.378 nvme0n1 00:18:57.378 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:57.378 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:57.378 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.638 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.638 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.638 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.899 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:57.899 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.899 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.899 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.899 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: '' 2s 00:18:57.899 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:57.899 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:57.899 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: 00:18:57.899 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:57.899 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:57.899 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:57.899 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: ]] 00:18:57.899 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZTA1NmIwNmZkZTEzMWYxMjZhYWM2MzRiNThlZjY0ZDJJrmRD: 00:18:57.899 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:57.899 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:57.899 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:59.921 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:59.921 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:18:59.921 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:18:59.921 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:18:59.921 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:18:59.921 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:18:59.921 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:18:59.921 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:59.921 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.921 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.921 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.921 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: 2s 00:18:59.921 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:59.921 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:59.921 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:59.921 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: 00:18:59.921 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:59.921 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:59.921 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:59.921 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: ]] 00:18:59.921 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZWZhYzk5ZDI3MThhNTgxNTUzMzUzMmRkZTM2NGQ4OWU3NTkyZWYyNDdjNjAxOTQ5o0oi4g==: 00:18:59.921 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:59.921 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:01.836 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:01.836 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:19:01.836 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:19:01.836 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:19:01.836 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:19:01.836 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:19:01.836 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:19:01.836 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.096 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:02.096 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.096 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.096 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.096 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:02.097 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:02.097 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:02.668 nvme0n1 00:19:02.668 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:02.669 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.669 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.669 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.669 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:02.669 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:03.240 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:03.240 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:03.240 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.502 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.502 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.502 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.502 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.502 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.502 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:03.502 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:03.502 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:03.502 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:03.502 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.763 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.763 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:03.763 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.763 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.763 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.763 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:03.763 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:03.763 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:03.763 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:03.763 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:03.763 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:03.763 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:03.763 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:03.763 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:04.333 request: 00:19:04.333 { 00:19:04.333 "name": "nvme0", 00:19:04.333 "dhchap_key": "key1", 00:19:04.333 "dhchap_ctrlr_key": "key3", 00:19:04.333 "method": "bdev_nvme_set_keys", 00:19:04.333 "req_id": 1 00:19:04.333 } 00:19:04.333 Got JSON-RPC error response 00:19:04.333 response: 00:19:04.333 { 00:19:04.333 "code": -13, 00:19:04.333 "message": "Permission denied" 00:19:04.334 } 00:19:04.334 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:04.334 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:04.334 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:04.334 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:04.334 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:04.334 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:04.334 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.334 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:19:04.334 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:05.274 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:05.534 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:05.534 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.534 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:05.534 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:05.534 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.534 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.534 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.534 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:05.534 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:05.534 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:06.482 nvme0n1 00:19:06.482 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:06.482 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.482 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.482 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.482 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:06.482 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:06.482 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:06.482 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
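(Annotation: the code -13 "Permission denied" exchanges in this stretch are the negative re-key cases: after nvmf_subsystem_set_keys restricts the host to a given pair, the host-side bdev_nvme_set_keys proposes a key the target no longer permits (key1/key3 above, key2/key0 next), the re-authentication is refused, and the get_controllers/jq-length loop then waits for the dropped controller count to reach 0. The positive path pairs the same two RPCs with matching keys, as exercised earlier in this run:)

    # target: stage the new key set for this host
    scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # host: re-authenticate the live controller with the matching pair
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3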
00:19:06.482 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.482 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:06.482 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.482 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:06.482 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:06.742 request: 00:19:06.742 { 00:19:06.742 "name": "nvme0", 00:19:06.742 "dhchap_key": "key2", 00:19:06.742 "dhchap_ctrlr_key": "key0", 00:19:06.742 "method": "bdev_nvme_set_keys", 00:19:06.742 "req_id": 1 00:19:06.742 } 00:19:06.742 Got JSON-RPC error response 00:19:06.742 response: 00:19:06.742 { 00:19:06.742 "code": -13, 00:19:06.742 "message": "Permission denied" 00:19:06.742 } 00:19:06.742 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:06.742 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:06.742 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:06.742 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:06.742 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:06.742 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:06.742 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.009 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:07.009 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:07.955 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:07.955 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:07.955 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.223 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:08.223 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:08.223 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:08.223 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3490244 00:19:08.223 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3490244 ']' 00:19:08.223 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3490244 00:19:08.223 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:19:08.223 
07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:08.223 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3490244 00:19:08.223 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:08.223 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:08.223 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3490244' 00:19:08.223 killing process with pid 3490244 00:19:08.223 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3490244 00:19:08.223 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3490244 00:19:08.483 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:08.483 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:08.483 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:08.483 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:08.483 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:08.483 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:08.483 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:08.483 rmmod nvme_tcp 00:19:08.483 rmmod nvme_fabrics 00:19:08.483 rmmod nvme_keyring 00:19:08.483 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:08.483 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:08.483 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:08.483 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3516062 ']' 00:19:08.483 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3516062 00:19:08.483 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3516062 ']' 00:19:08.483 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3516062 00:19:08.483 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:19:08.483 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:08.483 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3516062 00:19:08.483 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:08.483 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:08.483 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3516062' 00:19:08.483 killing process with pid 3516062 00:19:08.483 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3516062 00:19:08.483 07:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3516062 00:19:08.744 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:08.744 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:08.744 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:08.744 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:08.744 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:08.744 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:08.744 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:08.744 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:08.744 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:08.744 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.744 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:08.744 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.655 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:10.655 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.cAU /tmp/spdk.key-sha256.JhY /tmp/spdk.key-sha384.fC4 /tmp/spdk.key-sha512.r0j /tmp/spdk.key-sha512.QYF /tmp/spdk.key-sha384.q2T /tmp/spdk.key-sha256.ZFh '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:10.656 00:19:10.656 real 2m36.151s 00:19:10.656 user 5m50.786s 00:19:10.656 sys 0m24.847s 00:19:10.656 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:10.656 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.656 ************************************ 00:19:10.656 END TEST nvmf_auth_target 00:19:10.656 ************************************ 00:19:10.916 07:17:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:10.916 07:17:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:10.916 07:17:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:10.916 07:17:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:10.916 07:17:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:10.916 ************************************ 00:19:10.916 START TEST nvmf_bdevio_no_huge 00:19:10.916 ************************************ 00:19:10.916 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:10.916 * Looking for test storage... 
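(Annotation, before the bdevio_no_huge output continues: the teardown traced above is roughly what cleanup/nvmftestfini amounts to on this physical TCP rig: kill the target and host apps, unload the kernel initiator modules, restore the firewall, flush the test NIC, and delete the generated DH-CHAP key files. A sketch of the equivalent manual steps, run as root, with the interface name taken from this run:)

    modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics/nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop SPDK's test rules
    ip -4 addr flush cvl_0_1                               # clear the test NIC address
    rm -f /tmp/spdk.key-null.cAU /tmp/spdk.key-sha256.JhY  # ...and the rest of the key files listed above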
00:19:10.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:10.916 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:10.916 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:19:10.916 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:10.916 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:10.916 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:10.916 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:10.916 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:10.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.917 --rc genhtml_branch_coverage=1 00:19:10.917 --rc genhtml_function_coverage=1 00:19:10.917 --rc genhtml_legend=1 00:19:10.917 --rc geninfo_all_blocks=1 00:19:10.917 --rc geninfo_unexecuted_blocks=1 00:19:10.917 00:19:10.917 ' 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:10.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.917 --rc genhtml_branch_coverage=1 00:19:10.917 --rc genhtml_function_coverage=1 00:19:10.917 --rc genhtml_legend=1 00:19:10.917 --rc geninfo_all_blocks=1 00:19:10.917 --rc geninfo_unexecuted_blocks=1 00:19:10.917 00:19:10.917 ' 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:10.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.917 --rc genhtml_branch_coverage=1 00:19:10.917 --rc genhtml_function_coverage=1 00:19:10.917 --rc genhtml_legend=1 00:19:10.917 --rc geninfo_all_blocks=1 00:19:10.917 --rc geninfo_unexecuted_blocks=1 00:19:10.917 00:19:10.917 ' 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:10.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.917 --rc genhtml_branch_coverage=1 00:19:10.917 --rc genhtml_function_coverage=1 00:19:10.917 --rc genhtml_legend=1 00:19:10.917 --rc geninfo_all_blocks=1 00:19:10.917 --rc geninfo_unexecuted_blocks=1 00:19:10.917 00:19:10.917 ' 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:10.917 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:11.178 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:11.178 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:11.178 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:11.178 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.178 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.178 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.178 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:11.178 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:11.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:11.179 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:19.319 
07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:19.319 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:19.320 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:19.320 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:19.320 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:19.320 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:19.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:19.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:19:19.320 00:19:19.320 --- 10.0.0.2 ping statistics --- 00:19:19.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.320 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:19.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:19.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:19:19.320 00:19:19.320 --- 10.0.0.1 ping statistics --- 00:19:19.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.320 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:19.320 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:19.321 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:19.321 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:19.321 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:19.321 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:19.321 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:19.321 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:19.321 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:19.321 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:19.321 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:19.321 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3524217 00:19:19.321 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3524217 00:19:19.321 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:19.321 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 3524217 ']' 00:19:19.321 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.321 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@838 -- # local max_retries=100 00:19:19.321 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.321 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:19.321 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:19.321 [2024-11-20 07:17:40.754144] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:19:19.321 [2024-11-20 07:17:40.754220] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:19.321 [2024-11-20 07:17:40.861055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:19.321 [2024-11-20 07:17:40.922097] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.321 [2024-11-20 07:17:40.922145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.321 [2024-11-20 07:17:40.922154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:19.321 [2024-11-20 07:17:40.922171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:19.321 [2024-11-20 07:17:40.922178] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
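The trace above captures the whole target bring-up for this test: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with hugepages disabled, 1024 MiB of memory (-s 1024) and reactor mask 0x78 (cores 3-6), then the harness blocks in waitforlisten until /var/tmp/spdk.sock answers. A minimal sketch of that sequence, with waitforlisten approximated by rpc.py's own timeout rather than the harness code verbatim:

#!/usr/bin/env bash
# Launch pattern recorded above: target in the test namespace, no hugepages,
# 1024 MiB of legacy memory, reactors pinned to cores 3-6 via mask 0x78.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!

# Stand-in for waitforlisten: poll the default RPC socket until the app is up.
"$SPDK/scripts/rpc.py" -t 120 rpc_get_methods > /dev/null

# First RPC the test issues once the target listens (see bdevio.sh@18 below).
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192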
00:19:19.321 [2024-11-20 07:17:40.923998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:19.321 [2024-11-20 07:17:40.924156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:19.321 [2024-11-20 07:17:40.924315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:19.321 [2024-11-20 07:17:40.924415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:19.321 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:19.321 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:19:19.321 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:19.321 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:19.321 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:19.582 [2024-11-20 07:17:41.623971] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:19.582 Malloc0 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:19.582 [2024-11-20 07:17:41.677787] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:19.582 { 00:19:19.582 "params": { 00:19:19.582 "name": "Nvme$subsystem", 00:19:19.582 "trtype": "$TEST_TRANSPORT", 00:19:19.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.582 "adrfam": "ipv4", 00:19:19.582 "trsvcid": "$NVMF_PORT", 00:19:19.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.582 "hdgst": ${hdgst:-false}, 00:19:19.582 "ddgst": ${ddgst:-false} 00:19:19.582 }, 00:19:19.582 "method": "bdev_nvme_attach_controller" 00:19:19.582 } 00:19:19.582 EOF 00:19:19.582 )") 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:19.582 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:19.582 "params": { 00:19:19.582 "name": "Nvme1", 00:19:19.582 "trtype": "tcp", 00:19:19.582 "traddr": "10.0.0.2", 00:19:19.582 "adrfam": "ipv4", 00:19:19.582 "trsvcid": "4420", 00:19:19.582 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.582 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:19.582 "hdgst": false, 00:19:19.582 "ddgst": false 00:19:19.582 }, 00:19:19.582 "method": "bdev_nvme_attach_controller" 00:19:19.582 }' 00:19:19.582 [2024-11-20 07:17:41.737906] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
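gen_nvmf_target_json above expands its here-doc into the bdev_nvme_attach_controller entry that jq and printf echo back, and bdevio consumes the finished config from an anonymous pipe surfaced as /dev/fd/62. A sketch of that hand-off, with the entry's values taken verbatim from the trace and the surrounding "subsystems"/"config" wrapper reconstructed in the standard SPDK JSON-config shape (the wrapper itself is not printed in the log):

#!/usr/bin/env bash
# Feed the reconstructed config to bdevio on fd 62, mirroring --json /dev/fd/62.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

"$SPDK/test/bdev/bdevio/bdevio" --json /dev/fd/62 --no-huge -s 1024 62<< 'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON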
00:19:19.582 [2024-11-20 07:17:41.737978] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3524571 ] 00:19:19.582 [2024-11-20 07:17:41.834318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:19.843 [2024-11-20 07:17:41.894607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.843 [2024-11-20 07:17:41.894770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.843 [2024-11-20 07:17:41.894770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.104 I/O targets: 00:19:20.104 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:20.104 00:19:20.104 00:19:20.104 CUnit - A unit testing framework for C - Version 2.1-3 00:19:20.104 http://cunit.sourceforge.net/ 00:19:20.104 00:19:20.104 00:19:20.104 Suite: bdevio tests on: Nvme1n1 00:19:20.104 Test: blockdev write read block ...passed 00:19:20.104 Test: blockdev write zeroes read block ...passed 00:19:20.104 Test: blockdev write zeroes read no split ...passed 00:19:20.104 Test: blockdev write zeroes read split ...passed 00:19:20.365 Test: blockdev write zeroes read split partial ...passed 00:19:20.365 Test: blockdev reset ...[2024-11-20 07:17:42.424733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:20.365 [2024-11-20 07:17:42.424830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1536800 (9): Bad file descriptor 00:19:20.365 [2024-11-20 07:17:42.528272] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:19:20.365 passed 00:19:20.365 Test: blockdev write read 8 blocks ...passed 00:19:20.365 Test: blockdev write read size > 128k ...passed 00:19:20.365 Test: blockdev write read invalid size ...passed 00:19:20.626 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:20.626 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:20.626 Test: blockdev write read max offset ...passed 00:19:20.626 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:20.626 Test: blockdev writev readv 8 blocks ...passed 00:19:20.626 Test: blockdev writev readv 30 x 1block ...passed 00:19:20.626 Test: blockdev writev readv block ...passed 00:19:20.626 Test: blockdev writev readv size > 128k ...passed 00:19:20.626 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:20.626 Test: blockdev comparev and writev ...[2024-11-20 07:17:42.835548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:20.626 [2024-11-20 07:17:42.835596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.626 [2024-11-20 07:17:42.835613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:20.626 [2024-11-20 07:17:42.835623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:20.626 [2024-11-20 07:17:42.836197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:20.626 [2024-11-20 07:17:42.836211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:20.626 [2024-11-20 07:17:42.836226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:20.626 [2024-11-20 07:17:42.836234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:20.626 [2024-11-20 07:17:42.836745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:20.626 [2024-11-20 07:17:42.836758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:20.626 [2024-11-20 07:17:42.836772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:20.627 [2024-11-20 07:17:42.836780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:20.627 [2024-11-20 07:17:42.837339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:20.627 [2024-11-20 07:17:42.837354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:20.627 [2024-11-20 07:17:42.837368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:20.627 [2024-11-20 07:17:42.837376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:20.627 passed 00:19:20.887 Test: blockdev nvme passthru rw ...passed 00:19:20.887 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:17:42.922051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:20.887 [2024-11-20 07:17:42.922116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:20.887 [2024-11-20 07:17:42.922483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:20.887 [2024-11-20 07:17:42.922500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:20.887 [2024-11-20 07:17:42.922911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:20.887 [2024-11-20 07:17:42.922926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:20.887 [2024-11-20 07:17:42.923296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:20.887 [2024-11-20 07:17:42.923309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:20.887 passed 00:19:20.887 Test: blockdev nvme admin passthru ...passed 00:19:20.887 Test: blockdev copy ...passed 00:19:20.887 00:19:20.887 Run Summary: Type Total Ran Passed Failed Inactive 00:19:20.887 suites 1 1 n/a 0 0 00:19:20.887 tests 23 23 23 0 0 00:19:20.887 asserts 152 152 152 0 n/a 00:19:20.887 00:19:20.887 Elapsed time = 1.487 seconds 00:19:21.148 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:21.148 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.148 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.148 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.148 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:21.148 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:21.148 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:21.148 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:21.148 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:21.148 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:21.148 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:21.148 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:21.148 rmmod nvme_tcp 00:19:21.148 rmmod nvme_fabrics 00:19:21.148 rmmod nvme_keyring 00:19:21.148 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:21.148 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:19:21.148 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:21.148 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3524217 ']' 00:19:21.148 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3524217 00:19:21.148 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 3524217 ']' 00:19:21.148 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 3524217 00:19:21.148 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:19:21.148 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:21.148 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3524217 00:19:21.409 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:19:21.409 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:19:21.409 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3524217' 00:19:21.409 killing process with pid 3524217 00:19:21.409 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 3524217 00:19:21.409 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 3524217 00:19:21.409 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:21.409 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:21.409 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:21.409 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:21.409 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:21.409 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:21.409 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:21.409 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:21.409 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:21.409 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.409 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:21.409 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.953 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:23.953 00:19:23.953 real 0m12.774s 00:19:23.953 user 0m15.847s 00:19:23.953 sys 0m6.772s 00:19:23.953 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:23.953 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:23.953 ************************************ 00:19:23.953 END TEST nvmf_bdevio_no_huge 00:19:23.953 ************************************ 00:19:23.953 07:17:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:23.953 07:17:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:23.953 07:17:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:23.953 07:17:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:23.953 ************************************ 00:19:23.953 START TEST nvmf_tls 00:19:23.953 ************************************ 00:19:23.953 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:23.953 * Looking for test storage... 00:19:23.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:23.953 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:23.953 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:19:23.953 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:23.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.953 --rc genhtml_branch_coverage=1 00:19:23.953 --rc genhtml_function_coverage=1 00:19:23.953 --rc genhtml_legend=1 00:19:23.953 --rc geninfo_all_blocks=1 00:19:23.953 --rc geninfo_unexecuted_blocks=1 00:19:23.953 00:19:23.953 ' 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:23.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.953 --rc genhtml_branch_coverage=1 00:19:23.953 --rc genhtml_function_coverage=1 00:19:23.953 --rc genhtml_legend=1 00:19:23.953 --rc geninfo_all_blocks=1 00:19:23.953 --rc geninfo_unexecuted_blocks=1 00:19:23.953 00:19:23.953 ' 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:23.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.953 --rc genhtml_branch_coverage=1 00:19:23.953 --rc genhtml_function_coverage=1 00:19:23.953 --rc genhtml_legend=1 00:19:23.953 --rc geninfo_all_blocks=1 00:19:23.953 --rc geninfo_unexecuted_blocks=1 00:19:23.953 00:19:23.953 ' 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:23.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.953 --rc genhtml_branch_coverage=1 00:19:23.953 --rc genhtml_function_coverage=1 00:19:23.953 --rc genhtml_legend=1 00:19:23.953 --rc geninfo_all_blocks=1 00:19:23.953 --rc geninfo_unexecuted_blocks=1 00:19:23.953 00:19:23.953 ' 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
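The cmp_versions trace that opens this test (and the identical one in the bdevio section above) is a plain field-wise compare: both version strings are split on ".", "-" and ":", missing fields count as zero, and the first differing field decides. A self-contained sketch of the same logic; the function name is illustrative, not the harness's own:

#!/usr/bin/env bash
# Field-wise version compare mirroring the scripts/common.sh trace above.
# Prints "lt", "gt" or "eq" for two version strings.
cmp_versions_sketch() {
    local IFS=.-:                       # same separators the trace splits on
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # absent fields compare as 0
        (( 10#$a > 10#$b )) && { echo gt; return; }
        (( 10#$a < 10#$b )) && { echo lt; return; }
    done
    echo eq
}

cmp_versions_sketch 1.15 2   # -> lt, matching "lt 1.15 2" returning 0 above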
00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:23.953 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:23.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:23.954 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.157 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:32.157 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:32.157 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:32.157 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:32.157 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:32.157 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:32.157 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:32.157 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:32.157 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:32.157 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:32.157 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:32.157 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:32.157 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:32.158 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:32.158 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:32.158 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:32.158 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:32.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:32.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:19:32.158 00:19:32.158 --- 10.0.0.2 ping statistics --- 00:19:32.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.158 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:32.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:32.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:19:32.158 00:19:32.158 --- 10.0.0.1 ping statistics --- 00:19:32.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.158 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:32.158 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:32.159 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:32.159 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:32.159 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:32.159 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:32.159 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:32.159 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:32.159 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:32.159 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.159 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3529029 00:19:32.159 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3529029 00:19:32.159 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:32.159 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3529029 ']' 00:19:32.159 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.159 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:32.159 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.159 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:32.159 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.159 [2024-11-20 07:17:53.665040] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:19:32.159 [2024-11-20 07:17:53.665109] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.159 [2024-11-20 07:17:53.754394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.159 [2024-11-20 07:17:53.805616] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.159 [2024-11-20 07:17:53.805668] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.159 [2024-11-20 07:17:53.805676] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.159 [2024-11-20 07:17:53.805684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.159 [2024-11-20 07:17:53.805690] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:32.159 [2024-11-20 07:17:53.806442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.419 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:32.419 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:32.419 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:32.419 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:32.419 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.419 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.419 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:32.419 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:32.419 true 00:19:32.680 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:32.680 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:32.680 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:32.680 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:32.680 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:32.940 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:32.940 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:33.202 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:33.202 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:33.202 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:33.463 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:33.463 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:33.463 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:33.463 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:33.463 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:33.463 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:33.725 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:33.725 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:33.725 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:34.107 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:34.107 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:34.107 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:34.107 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:34.107 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.98fprcuXE2 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.RtBJdQsYYt 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.98fprcuXE2 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.RtBJdQsYYt 00:19:34.389 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:34.650 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:34.911 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.98fprcuXE2 00:19:34.911 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.98fprcuXE2 00:19:34.911 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:34.911 [2024-11-20 07:17:57.184476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.172 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:35.172 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:35.432 [2024-11-20 07:17:57.501243] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:35.432 [2024-11-20 07:17:57.501467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.432 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:35.432 malloc0 00:19:35.432 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:35.693 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.98fprcuXE2 00:19:35.953 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:35.953 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.98fprcuXE2 00:19:48.190 Initializing NVMe Controllers 00:19:48.190 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:48.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:48.190 Initialization complete. Launching workers. 00:19:48.190 ======================================================== 00:19:48.190 Latency(us) 00:19:48.190 Device Information : IOPS MiB/s Average min max 00:19:48.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18598.00 72.65 3441.44 1246.43 4187.69 00:19:48.190 ======================================================== 00:19:48.190 Total : 18598.00 72.65 3441.44 1246.43 4187.69 00:19:48.190 00:19:48.190 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.98fprcuXE2 00:19:48.190 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:48.190 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:48.190 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:48.190 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.98fprcuXE2 00:19:48.190 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:48.190 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3532087 00:19:48.190 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:48.190 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3532087 /var/tmp/bdevperf.sock 00:19:48.190 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:48.190 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3532087 ']' 00:19:48.190 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.190 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:48.190 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:48.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:48.190 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:48.190 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.190 [2024-11-20 07:18:08.354547] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:19:48.190 [2024-11-20 07:18:08.354607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3532087 ] 00:19:48.190 [2024-11-20 07:18:08.444193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.190 [2024-11-20 07:18:08.479857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.190 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:48.190 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:48.190 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.98fprcuXE2 00:19:48.190 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:48.190 [2024-11-20 07:18:09.455174] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:48.190 TLSTESTn1 00:19:48.190 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:48.190 Running I/O for 10 seconds... 
00:19:49.394 5474.00 IOPS, 21.38 MiB/s [2024-11-20T06:18:13.056Z] 5373.00 IOPS, 20.99 MiB/s [2024-11-20T06:18:13.997Z] 5450.67 IOPS, 21.29 MiB/s [2024-11-20T06:18:14.938Z] 5312.75 IOPS, 20.75 MiB/s [2024-11-20T06:18:15.879Z] 5420.40 IOPS, 21.17 MiB/s [2024-11-20T06:18:16.818Z] 5499.83 IOPS, 21.48 MiB/s [2024-11-20T06:18:17.757Z] 5439.29 IOPS, 21.25 MiB/s [2024-11-20T06:18:18.697Z] 5399.62 IOPS, 21.09 MiB/s [2024-11-20T06:18:20.081Z] 5303.33 IOPS, 20.72 MiB/s [2024-11-20T06:18:20.081Z] 5332.60 IOPS, 20.83 MiB/s 00:19:57.803 Latency(us) 00:19:57.803 [2024-11-20T06:18:20.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.803 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:57.803 Verification LBA range: start 0x0 length 0x2000 00:19:57.803 TLSTESTn1 : 10.01 5338.88 20.86 0.00 0.00 23940.41 5133.65 37137.07 00:19:57.803 [2024-11-20T06:18:20.081Z] =================================================================================================================== 00:19:57.803 [2024-11-20T06:18:20.081Z] Total : 5338.88 20.86 0.00 0.00 23940.41 5133.65 37137.07 00:19:57.803 { 00:19:57.803 "results": [ 00:19:57.803 { 00:19:57.803 "job": "TLSTESTn1", 00:19:57.803 "core_mask": "0x4", 00:19:57.803 "workload": "verify", 00:19:57.803 "status": "finished", 00:19:57.803 "verify_range": { 00:19:57.803 "start": 0, 00:19:57.803 "length": 8192 00:19:57.803 }, 00:19:57.803 "queue_depth": 128, 00:19:57.803 "io_size": 4096, 00:19:57.803 "runtime": 10.012022, 00:19:57.803 "iops": 5338.881596544634, 00:19:57.803 "mibps": 20.855006236502476, 00:19:57.803 "io_failed": 0, 00:19:57.803 "io_timeout": 0, 00:19:57.803 "avg_latency_us": 23940.40518785974, 00:19:57.803 "min_latency_us": 5133.653333333334, 00:19:57.803 "max_latency_us": 37137.066666666666 00:19:57.803 } 00:19:57.803 ], 00:19:57.803 "core_count": 1 00:19:57.803 } 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3532087 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3532087 ']' 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3532087 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3532087 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3532087' 00:19:57.803 killing process with pid 3532087 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3532087 00:19:57.803 Received shutdown signal, test time was about 10.000000 seconds 00:19:57.803 00:19:57.803 Latency(us) 00:19:57.803 [2024-11-20T06:18:20.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.803 [2024-11-20T06:18:20.081Z] 
=================================================================================================================== 00:19:57.803 [2024-11-20T06:18:20.081Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3532087 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RtBJdQsYYt 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RtBJdQsYYt 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RtBJdQsYYt 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.RtBJdQsYYt 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3534776 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3534776 /var/tmp/bdevperf.sock 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3534776 ']' 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:57.803 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.803 [2024-11-20 07:18:19.924217] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:19:57.804 [2024-11-20 07:18:19.924274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3534776 ] 00:19:57.804 [2024-11-20 07:18:20.009067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.804 [2024-11-20 07:18:20.038228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.745 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:58.745 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:58.745 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RtBJdQsYYt 00:19:58.745 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:59.006 [2024-11-20 07:18:21.061048] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:59.006 [2024-11-20 07:18:21.071720] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:59.006 [2024-11-20 07:18:21.072168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3fbb0 (107): Transport endpoint is not connected 00:19:59.006 [2024-11-20 07:18:21.073163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3fbb0 (9): Bad file descriptor 00:19:59.006 [2024-11-20 07:18:21.074165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:59.006 [2024-11-20 07:18:21.074173] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:59.006 [2024-11-20 07:18:21.074179] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:59.006 [2024-11-20 07:18:21.074186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:59.006 request: 00:19:59.006 { 00:19:59.006 "name": "TLSTEST", 00:19:59.006 "trtype": "tcp", 00:19:59.006 "traddr": "10.0.0.2", 00:19:59.006 "adrfam": "ipv4", 00:19:59.006 "trsvcid": "4420", 00:19:59.006 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.006 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:59.006 "prchk_reftag": false, 00:19:59.006 "prchk_guard": false, 00:19:59.006 "hdgst": false, 00:19:59.006 "ddgst": false, 00:19:59.006 "psk": "key0", 00:19:59.006 "allow_unrecognized_csi": false, 00:19:59.006 "method": "bdev_nvme_attach_controller", 00:19:59.006 "req_id": 1 00:19:59.006 } 00:19:59.006 Got JSON-RPC error response 00:19:59.006 response: 00:19:59.006 { 00:19:59.006 "code": -5, 00:19:59.006 "message": "Input/output error" 00:19:59.006 } 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3534776 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3534776 ']' 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3534776 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3534776 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3534776' 00:19:59.006 killing process with pid 3534776 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3534776 00:19:59.006 Received shutdown signal, test time was about 10.000000 seconds 00:19:59.006 00:19:59.006 Latency(us) 00:19:59.006 [2024-11-20T06:18:21.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.006 [2024-11-20T06:18:21.284Z] =================================================================================================================== 00:19:59.006 [2024-11-20T06:18:21.284Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3534776 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.98fprcuXE2 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.98fprcuXE2 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.98fprcuXE2 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:59.006 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:59.007 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.98fprcuXE2 00:19:59.007 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:59.007 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3534979 00:19:59.007 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:59.007 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3534979 /var/tmp/bdevperf.sock 00:19:59.007 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:59.007 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3534979 ']' 00:19:59.007 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:59.007 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:59.007 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:59.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:59.007 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:59.007 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.267 [2024-11-20 07:18:21.322821] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:19:59.267 [2024-11-20 07:18:21.322878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3534979 ] 00:19:59.267 [2024-11-20 07:18:21.405497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.267 [2024-11-20 07:18:21.434484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.838 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:59.838 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:59.838 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.98fprcuXE2 00:20:00.098 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:00.359 [2024-11-20 07:18:22.441015] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:00.359 [2024-11-20 07:18:22.445434] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:00.359 [2024-11-20 07:18:22.445456] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:00.359 [2024-11-20 07:18:22.445475] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:00.359 [2024-11-20 07:18:22.446116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd58bb0 (107): Transport endpoint is not connected 00:20:00.359 [2024-11-20 07:18:22.447111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd58bb0 (9): Bad file descriptor 00:20:00.359 [2024-11-20 07:18:22.448113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:00.359 [2024-11-20 07:18:22.448121] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:00.359 [2024-11-20 07:18:22.448127] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:00.359 [2024-11-20 07:18:22.448135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:20:00.359 request: 00:20:00.359 { 00:20:00.359 "name": "TLSTEST", 00:20:00.359 "trtype": "tcp", 00:20:00.359 "traddr": "10.0.0.2", 00:20:00.359 "adrfam": "ipv4", 00:20:00.359 "trsvcid": "4420", 00:20:00.359 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.359 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:00.359 "prchk_reftag": false, 00:20:00.359 "prchk_guard": false, 00:20:00.359 "hdgst": false, 00:20:00.359 "ddgst": false, 00:20:00.359 "psk": "key0", 00:20:00.359 "allow_unrecognized_csi": false, 00:20:00.359 "method": "bdev_nvme_attach_controller", 00:20:00.359 "req_id": 1 00:20:00.359 } 00:20:00.359 Got JSON-RPC error response 00:20:00.359 response: 00:20:00.359 { 00:20:00.359 "code": -5, 00:20:00.359 "message": "Input/output error" 00:20:00.359 } 00:20:00.359 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3534979 00:20:00.359 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3534979 ']' 00:20:00.359 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3534979 00:20:00.359 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:00.359 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:00.359 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3534979 00:20:00.359 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:00.359 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:00.359 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3534979' 00:20:00.359 killing process with pid 3534979 00:20:00.359 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3534979 00:20:00.359 Received shutdown signal, test time was about 10.000000 seconds 00:20:00.359 00:20:00.359 Latency(us) 00:20:00.359 [2024-11-20T06:18:22.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.359 [2024-11-20T06:18:22.637Z] =================================================================================================================== 00:20:00.359 [2024-11-20T06:18:22.637Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:00.359 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3534979 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.98fprcuXE2 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.98fprcuXE2 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.98fprcuXE2 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.98fprcuXE2 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3535255 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3535255 /var/tmp/bdevperf.sock 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3535255 ']' 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:00.620 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.620 [2024-11-20 07:18:22.691593] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
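The NOT wrapper traced above (common/autotest_common.sh) is the suite's negative-test assertion: it runs the wrapped function and returns success only when that function fails, which is what the es=1 and (( !es == 0 )) bookkeeping in the trace implements. A minimal sketch of the same idea, not the suite's exact helper:

    # Hypothetical stand-in for the suite's NOT(): exit 0 only if the
    # wrapped command exits non-zero, i.e. the failure was expected.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    NOT false && echo "expected failure observed"
    NOT true || echo "wrapped command unexpectedly succeeded"

Each bdevperf case below follows that pattern: start an idle bdevperf, feed it RPCs that are supposed to fail, and let NOT turn the failure into a pass.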
00:20:00.620 [2024-11-20 07:18:22.691646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3535255 ] 00:20:00.620 [2024-11-20 07:18:22.774830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.620 [2024-11-20 07:18:22.802205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.560 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:01.560 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:01.561 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.98fprcuXE2 00:20:01.561 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:01.561 [2024-11-20 07:18:23.824651] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:01.561 [2024-11-20 07:18:23.829140] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:01.561 [2024-11-20 07:18:23.829165] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:01.561 [2024-11-20 07:18:23.829185] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:01.561 [2024-11-20 07:18:23.829868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x692bb0 (107): Transport endpoint is not connected 00:20:01.561 [2024-11-20 07:18:23.830863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x692bb0 (9): Bad file descriptor 00:20:01.561 [2024-11-20 07:18:23.831864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:01.561 [2024-11-20 07:18:23.831872] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:01.561 [2024-11-20 07:18:23.831878] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:01.561 [2024-11-20 07:18:23.831885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
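The attach above fails by design: key0 is registered only on the initiator, and the target looks up the PSK by the identity string built from both NQNs (the "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2" in the tcp.c and posix.c errors). With no server-side PSK for that host1/cnode2 pairing, the handshake is aborted, the socket is dropped, and the JSON-RPC dump that follows records the resulting -5 (Input/output error). The client-side pair of calls driving the case, with the long rpc.py path shortened for readability:

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.98fprcuXE2
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0
    # Expected to fail: the target has no PSK bound to the host1/cnode2 identity.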
00:20:01.561 request: 00:20:01.561 { 00:20:01.561 "name": "TLSTEST", 00:20:01.561 "trtype": "tcp", 00:20:01.561 "traddr": "10.0.0.2", 00:20:01.561 "adrfam": "ipv4", 00:20:01.561 "trsvcid": "4420", 00:20:01.561 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:01.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:01.561 "prchk_reftag": false, 00:20:01.561 "prchk_guard": false, 00:20:01.561 "hdgst": false, 00:20:01.561 "ddgst": false, 00:20:01.561 "psk": "key0", 00:20:01.561 "allow_unrecognized_csi": false, 00:20:01.561 "method": "bdev_nvme_attach_controller", 00:20:01.561 "req_id": 1 00:20:01.561 } 00:20:01.561 Got JSON-RPC error response 00:20:01.561 response: 00:20:01.561 { 00:20:01.561 "code": -5, 00:20:01.561 "message": "Input/output error" 00:20:01.561 } 00:20:01.821 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3535255 00:20:01.821 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3535255 ']' 00:20:01.821 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3535255 00:20:01.821 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:01.821 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:01.821 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3535255 00:20:01.821 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:01.821 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:01.821 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3535255' 00:20:01.821 killing process with pid 3535255 00:20:01.821 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3535255 00:20:01.821 Received shutdown signal, test time was about 10.000000 seconds 00:20:01.821 00:20:01.821 Latency(us) 00:20:01.821 [2024-11-20T06:18:24.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.821 [2024-11-20T06:18:24.099Z] =================================================================================================================== 00:20:01.821 [2024-11-20T06:18:24.099Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:01.821 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3535255 00:20:01.821 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:01.821 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:01.821 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:01.821 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:01.821 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:01.821 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:01.821 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:01.821 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:01.821 
07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:01.821 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:01.821 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:01.822 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:01.822 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:01.822 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:01.822 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:01.822 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:01.822 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:01.822 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:01.822 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3535595 00:20:01.822 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:01.822 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3535595 /var/tmp/bdevperf.sock 00:20:01.822 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:01.822 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3535595 ']' 00:20:01.822 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.822 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:01.822 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:01.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.822 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:01.822 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.822 [2024-11-20 07:18:24.070537] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
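This third case (tls.sh@156) passes an empty string where the key path belongs. keyring_file accepts absolute paths only, so here registration itself is what breaks (the keyring.c "Non-absolute paths are not allowed" error below, surfaced as -1 Operation not permitted), and the later attach then fails with -126 Required key not available because key0 was never added. A sketch of the distinction, with /tmp/psk.txt as a hypothetical valid path:

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''            # rejected: not absolute
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/psk.txt  # absolute path: accepted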
00:20:01.822 [2024-11-20 07:18:24.070591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3535595 ] 00:20:02.082 [2024-11-20 07:18:24.154753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.083 [2024-11-20 07:18:24.182677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.654 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:02.654 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:02.654 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:02.915 [2024-11-20 07:18:25.024394] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:02.915 [2024-11-20 07:18:25.024418] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:02.915 request: 00:20:02.915 { 00:20:02.915 "name": "key0", 00:20:02.915 "path": "", 00:20:02.915 "method": "keyring_file_add_key", 00:20:02.915 "req_id": 1 00:20:02.915 } 00:20:02.915 Got JSON-RPC error response 00:20:02.915 response: 00:20:02.915 { 00:20:02.915 "code": -1, 00:20:02.915 "message": "Operation not permitted" 00:20:02.915 } 00:20:02.915 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:03.176 [2024-11-20 07:18:25.208936] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:03.176 [2024-11-20 07:18:25.208960] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:03.176 request: 00:20:03.176 { 00:20:03.176 "name": "TLSTEST", 00:20:03.176 "trtype": "tcp", 00:20:03.176 "traddr": "10.0.0.2", 00:20:03.176 "adrfam": "ipv4", 00:20:03.176 "trsvcid": "4420", 00:20:03.176 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.176 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.176 "prchk_reftag": false, 00:20:03.176 "prchk_guard": false, 00:20:03.176 "hdgst": false, 00:20:03.176 "ddgst": false, 00:20:03.176 "psk": "key0", 00:20:03.176 "allow_unrecognized_csi": false, 00:20:03.176 "method": "bdev_nvme_attach_controller", 00:20:03.176 "req_id": 1 00:20:03.176 } 00:20:03.176 Got JSON-RPC error response 00:20:03.176 response: 00:20:03.176 { 00:20:03.176 "code": -126, 00:20:03.176 "message": "Required key not available" 00:20:03.176 } 00:20:03.176 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3535595 00:20:03.176 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3535595 ']' 00:20:03.176 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3535595 00:20:03.176 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:03.176 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:03.176 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
3535595 00:20:03.176 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:03.176 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:03.176 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3535595' 00:20:03.176 killing process with pid 3535595 00:20:03.176 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3535595 00:20:03.176 Received shutdown signal, test time was about 10.000000 seconds 00:20:03.176 00:20:03.176 Latency(us) 00:20:03.176 [2024-11-20T06:18:25.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.176 [2024-11-20T06:18:25.454Z] =================================================================================================================== 00:20:03.176 [2024-11-20T06:18:25.454Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:03.176 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3535595 00:20:03.176 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:03.176 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:03.176 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:03.177 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:03.177 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:03.177 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3529029 00:20:03.177 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3529029 ']' 00:20:03.177 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3529029 00:20:03.177 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:03.177 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:03.177 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3529029 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3529029' 00:20:03.438 killing process with pid 3529029 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3529029 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3529029 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:03.438 07:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.1WkDES6AI2 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.1WkDES6AI2 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3535953 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3535953 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3535953 ']' 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:03.438 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.438 [2024-11-20 07:18:25.690572] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:20:03.438 [2024-11-20 07:18:25.690634] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.698 [2024-11-20 07:18:25.783853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.698 [2024-11-20 07:18:25.815590] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.698 [2024-11-20 07:18:25.815622] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
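The format_interchange_psk call above (nvmf/common.sh@730-733) wraps the raw hex string in the NVMe TLS PSK interchange format, NVMeTLSkey-1:<hh>:<base64>:, where <hh> is the hash indicator (01 = SHA-256, 02 = SHA-384) and the base64 payload is the key bytes followed by their little-endian CRC32. The inline 'python -' body is elided by the trace; the one-liner below reconstructs the construction under that assumption. Note that the base64 in the key_long value above visibly decodes back to the ASCII hex string itself:

    key=00112233445566778899aabbccddeeff0011223344556677
    # 02 is the SHA-384 hash indicator (01 would be SHA-256).
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:02:" + base64.b64encode(k + c).decode() + ":")' "$key"

The formatted key is then written to a mktemp file and chmod'd to 0600, since (as the later cases show) keyring_file refuses anything more permissive.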
00:20:03.698 [2024-11-20 07:18:25.815627] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.698 [2024-11-20 07:18:25.815632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.698 [2024-11-20 07:18:25.815636] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.698 [2024-11-20 07:18:25.816133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.268 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:04.268 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:04.269 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:04.269 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:04.269 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.269 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.269 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.1WkDES6AI2 00:20:04.269 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.1WkDES6AI2 00:20:04.269 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:04.529 [2024-11-20 07:18:26.681422] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.529 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:04.789 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:04.789 [2024-11-20 07:18:27.018252] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:04.789 [2024-11-20 07:18:27.018470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.789 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:05.050 malloc0 00:20:05.050 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:05.310 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.1WkDES6AI2 00:20:05.310 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:05.570 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1WkDES6AI2 00:20:05.570 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:20:05.570 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:05.570 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:05.570 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1WkDES6AI2 00:20:05.570 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:05.570 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:05.570 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3536334 00:20:05.570 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:05.570 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3536334 /var/tmp/bdevperf.sock 00:20:05.570 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3536334 ']' 00:20:05.570 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:05.570 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:05.570 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:05.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:05.570 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:05.570 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.570 [2024-11-20 07:18:27.714742] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
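This is the first positive case (tls.sh@168): the 0600 key file registers cleanly on the initiator side, and the attach that follows succeeds, creating the TLSTESTn1 bdev (controller name TLSTEST, namespace 1) that the verify workload then runs against:

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1WkDES6AI2
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # Success path: the namespace appears as bdev "TLSTESTn1".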
00:20:05.571 [2024-11-20 07:18:27.714793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3536334 ] 00:20:05.571 [2024-11-20 07:18:27.801130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.571 [2024-11-20 07:18:27.830378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.831 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:05.831 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:05.831 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1WkDES6AI2 00:20:05.831 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:06.092 [2024-11-20 07:18:28.231406] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:06.092 TLSTESTn1 00:20:06.092 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:06.352 Running I/O for 10 seconds... 00:20:08.231 5781.00 IOPS, 22.58 MiB/s [2024-11-20T06:18:31.451Z] 5934.50 IOPS, 23.18 MiB/s [2024-11-20T06:18:32.833Z] 5968.33 IOPS, 23.31 MiB/s [2024-11-20T06:18:33.775Z] 6071.25 IOPS, 23.72 MiB/s [2024-11-20T06:18:34.717Z] 6110.20 IOPS, 23.87 MiB/s [2024-11-20T06:18:35.674Z] 6083.33 IOPS, 23.76 MiB/s [2024-11-20T06:18:36.615Z] 6047.57 IOPS, 23.62 MiB/s [2024-11-20T06:18:37.554Z] 6063.88 IOPS, 23.69 MiB/s [2024-11-20T06:18:38.495Z] 6000.67 IOPS, 23.44 MiB/s [2024-11-20T06:18:38.495Z] 5937.50 IOPS, 23.19 MiB/s 00:20:16.217 Latency(us) 00:20:16.217 [2024-11-20T06:18:38.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.217 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:16.217 Verification LBA range: start 0x0 length 0x2000 00:20:16.217 TLSTESTn1 : 10.01 5941.86 23.21 0.00 0.00 21510.44 5379.41 70341.97 00:20:16.217 [2024-11-20T06:18:38.495Z] =================================================================================================================== 00:20:16.217 [2024-11-20T06:18:38.495Z] Total : 5941.86 23.21 0.00 0.00 21510.44 5379.41 70341.97 00:20:16.217 { 00:20:16.217 "results": [ 00:20:16.217 { 00:20:16.217 "job": "TLSTESTn1", 00:20:16.217 "core_mask": "0x4", 00:20:16.217 "workload": "verify", 00:20:16.217 "status": "finished", 00:20:16.217 "verify_range": { 00:20:16.217 "start": 0, 00:20:16.217 "length": 8192 00:20:16.217 }, 00:20:16.217 "queue_depth": 128, 00:20:16.217 "io_size": 4096, 00:20:16.217 "runtime": 10.014029, 00:20:16.217 "iops": 5941.864158771659, 00:20:16.217 "mibps": 23.210406870201794, 00:20:16.217 "io_failed": 0, 00:20:16.217 "io_timeout": 0, 00:20:16.217 "avg_latency_us": 21510.44140320213, 00:20:16.217 "min_latency_us": 5379.413333333333, 00:20:16.217 "max_latency_us": 70341.97333333333 00:20:16.217 } 00:20:16.217 ], 00:20:16.217 
"core_count": 1 00:20:16.217 } 00:20:16.217 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:16.217 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3536334 00:20:16.217 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3536334 ']' 00:20:16.217 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3536334 00:20:16.217 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:16.217 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:16.217 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3536334 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3536334' 00:20:16.478 killing process with pid 3536334 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3536334 00:20:16.478 Received shutdown signal, test time was about 10.000000 seconds 00:20:16.478 00:20:16.478 Latency(us) 00:20:16.478 [2024-11-20T06:18:38.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.478 [2024-11-20T06:18:38.756Z] =================================================================================================================== 00:20:16.478 [2024-11-20T06:18:38.756Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3536334 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.1WkDES6AI2 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1WkDES6AI2 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1WkDES6AI2 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1WkDES6AI2 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1WkDES6AI2 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3538416 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3538416 /var/tmp/bdevperf.sock 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3538416 ']' 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:16.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:16.478 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.478 [2024-11-20 07:18:38.701611] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
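tls.sh@171 just loosened the key file to 0666, which is the entire point of this case: keyring_file refuses key files readable by group or other, so registration fails before any handshake is attempted (the "Invalid permissions for key file ... 0100666" error below), and the attach again dies with -126. The suite restores 0600 at tls.sh@182 so the same file can be reused afterwards:

    chmod 0666 /tmp/tmp.1WkDES6AI2
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1WkDES6AI2  # rejected: 0100666
    chmod 0600 /tmp/tmp.1WkDES6AI2                                                  # owner-only again
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1WkDES6AI2  # accepted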
00:20:16.478 [2024-11-20 07:18:38.701668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3538416 ] 00:20:16.740 [2024-11-20 07:18:38.785251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.740 [2024-11-20 07:18:38.814114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.312 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:17.312 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:17.312 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1WkDES6AI2 00:20:17.573 [2024-11-20 07:18:39.648377] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.1WkDES6AI2': 0100666 00:20:17.573 [2024-11-20 07:18:39.648399] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:17.573 request: 00:20:17.573 { 00:20:17.573 "name": "key0", 00:20:17.573 "path": "/tmp/tmp.1WkDES6AI2", 00:20:17.573 "method": "keyring_file_add_key", 00:20:17.573 "req_id": 1 00:20:17.573 } 00:20:17.573 Got JSON-RPC error response 00:20:17.573 response: 00:20:17.573 { 00:20:17.573 "code": -1, 00:20:17.573 "message": "Operation not permitted" 00:20:17.573 } 00:20:17.573 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:17.573 [2024-11-20 07:18:39.832905] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:17.573 [2024-11-20 07:18:39.832929] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:17.573 request: 00:20:17.573 { 00:20:17.573 "name": "TLSTEST", 00:20:17.573 "trtype": "tcp", 00:20:17.573 "traddr": "10.0.0.2", 00:20:17.573 "adrfam": "ipv4", 00:20:17.573 "trsvcid": "4420", 00:20:17.573 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:17.573 "prchk_reftag": false, 00:20:17.573 "prchk_guard": false, 00:20:17.573 "hdgst": false, 00:20:17.573 "ddgst": false, 00:20:17.573 "psk": "key0", 00:20:17.573 "allow_unrecognized_csi": false, 00:20:17.573 "method": "bdev_nvme_attach_controller", 00:20:17.573 "req_id": 1 00:20:17.573 } 00:20:17.573 Got JSON-RPC error response 00:20:17.573 response: 00:20:17.573 { 00:20:17.573 "code": -126, 00:20:17.573 "message": "Required key not available" 00:20:17.573 } 00:20:17.834 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3538416 00:20:17.834 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3538416 ']' 00:20:17.834 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3538416 00:20:17.834 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:17.834 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:17.834 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3538416 00:20:17.834 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:17.834 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:17.834 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3538416' 00:20:17.834 killing process with pid 3538416 00:20:17.834 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3538416 00:20:17.834 Received shutdown signal, test time was about 10.000000 seconds 00:20:17.834 00:20:17.834 Latency(us) 00:20:17.834 [2024-11-20T06:18:40.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.834 [2024-11-20T06:18:40.112Z] =================================================================================================================== 00:20:17.834 [2024-11-20T06:18:40.112Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:17.834 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3538416 00:20:17.834 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:17.834 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:17.834 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:17.834 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:17.834 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:17.834 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3535953 00:20:17.834 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3535953 ']' 00:20:17.834 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3535953 00:20:17.834 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:17.834 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:17.834 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3535953 00:20:17.834 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:17.834 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:17.834 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3535953' 00:20:17.834 killing process with pid 3535953 00:20:17.834 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3535953 00:20:17.834 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3535953 00:20:18.095 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:18.095 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:18.095 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:18.095 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.095 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=3538700 00:20:18.095 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3538700 00:20:18.095 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:18.095 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3538700 ']' 00:20:18.095 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.095 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:18.095 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.096 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:18.096 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.096 [2024-11-20 07:18:40.267887] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:20:18.096 [2024-11-20 07:18:40.267972] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.096 [2024-11-20 07:18:40.361381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.356 [2024-11-20 07:18:40.391288] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.356 [2024-11-20 07:18:40.391317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.356 [2024-11-20 07:18:40.391323] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.356 [2024-11-20 07:18:40.391328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.356 [2024-11-20 07:18:40.391333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
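Every nvmf_tgt in this log is launched the same way: inside the test's cvl_0_0_ns_spdk network namespace, pinned to core 1 by the -m 0x2 core mask (hence "Reactor started on core 1" below), with all tracepoint groups enabled by -e 0xFFFF (hence the spdk_trace notices above), and -i 0 naming the shared-memory instance that the exit trap's process_shm collects. A sketch of the launch-and-wait pattern, with waitforlisten standing in for the suite's helper of that name:

    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # suite helper: polls until /var/tmp/spdk.sock accepts RPCs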
00:20:18.356 [2024-11-20 07:18:40.391778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.928 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:18.928 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:18.928 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:18.928 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:18.928 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.928 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.928 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.1WkDES6AI2 00:20:18.928 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:18.928 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.1WkDES6AI2 00:20:18.928 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:20:18.928 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:18.928 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:20:18.928 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:18.928 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.1WkDES6AI2 00:20:18.928 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.1WkDES6AI2 00:20:18.928 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:19.189 [2024-11-20 07:18:41.239804] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.189 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:19.189 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:19.450 [2024-11-20 07:18:41.608710] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:19.450 [2024-11-20 07:18:41.608935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.450 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:19.711 malloc0 00:20:19.711 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:19.971 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.1WkDES6AI2 00:20:19.971 [2024-11-20 
07:18:42.139660] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.1WkDES6AI2': 0100666 00:20:19.971 [2024-11-20 07:18:42.139681] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:19.971 request: 00:20:19.971 { 00:20:19.971 "name": "key0", 00:20:19.971 "path": "/tmp/tmp.1WkDES6AI2", 00:20:19.971 "method": "keyring_file_add_key", 00:20:19.971 "req_id": 1 00:20:19.971 } 00:20:19.971 Got JSON-RPC error response 00:20:19.971 response: 00:20:19.971 { 00:20:19.971 "code": -1, 00:20:19.971 "message": "Operation not permitted" 00:20:19.971 } 00:20:19.971 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:20.232 [2024-11-20 07:18:42.324130] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:20.233 [2024-11-20 07:18:42.324155] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:20.233 request: 00:20:20.233 { 00:20:20.233 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.233 "host": "nqn.2016-06.io.spdk:host1", 00:20:20.233 "psk": "key0", 00:20:20.233 "method": "nvmf_subsystem_add_host", 00:20:20.233 "req_id": 1 00:20:20.233 } 00:20:20.233 Got JSON-RPC error response 00:20:20.233 response: 00:20:20.233 { 00:20:20.233 "code": -32603, 00:20:20.233 "message": "Internal error" 00:20:20.233 } 00:20:20.233 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:20.233 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:20.233 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:20.233 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:20.233 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3538700 00:20:20.233 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3538700 ']' 00:20:20.233 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3538700 00:20:20.233 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:20.233 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:20.233 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3538700 00:20:20.233 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:20.233 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:20.233 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3538700' 00:20:20.233 killing process with pid 3538700 00:20:20.233 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3538700 00:20:20.233 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3538700 00:20:20.494 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.1WkDES6AI2 00:20:20.494 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:20.494 07:18:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:20.494 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:20.494 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.494 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3539349 00:20:20.494 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3539349 00:20:20.494 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:20.494 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3539349 ']' 00:20:20.494 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.494 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:20.494 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.494 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:20.494 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.494 [2024-11-20 07:18:42.596105] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:20:20.494 [2024-11-20 07:18:42.596156] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.494 [2024-11-20 07:18:42.685813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.494 [2024-11-20 07:18:42.713420] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.494 [2024-11-20 07:18:42.713451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.494 [2024-11-20 07:18:42.713457] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.494 [2024-11-20 07:18:42.713462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.494 [2024-11-20 07:18:42.713466] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
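setup_nvmf_tgt (tls.sh@50-59, traced above and again below) is the target-side half of each positive case: TCP transport, one subsystem with a TLS-enabled listener, a RAM-backed namespace, and a host entry bound to the PSK. The -k on nvmf_subsystem_add_listener is what makes the listener require TLS; the -o on nvmf_create_transport is copied from the trace as-is:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10  # serial, max 10 namespaces
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                               # -k: secure channel (TLS) required
    rpc.py bdev_malloc_create 32 4096 -b malloc0                    # 32 MiB RAM disk, 4 KiB blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.1WkDES6AI2
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The nvmf_subsystem_add_host failure at tls.sh@178 earlier (code -32603, "Key 'key0' does not exist") is this same sequence minus a successful keyring_file_add_key: binding a host to a PSK requires the key to already be in the keyring.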
00:20:20.494 [2024-11-20 07:18:42.713905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.438 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:21.438 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:21.438 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:21.438 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:21.438 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.438 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.438 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.1WkDES6AI2 00:20:21.438 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.1WkDES6AI2 00:20:21.438 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:21.438 [2024-11-20 07:18:43.589125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.438 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:21.698 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:21.698 [2024-11-20 07:18:43.954023] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:21.698 [2024-11-20 07:18:43.954240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.959 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:21.959 malloc0 00:20:21.959 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:22.219 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.1WkDES6AI2 00:20:22.480 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:22.480 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:22.480 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3539756 00:20:22.480 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:22.480 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3539756 /var/tmp/bdevperf.sock 00:20:22.480 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 3539756 ']' 00:20:22.480 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:22.480 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:22.480 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:22.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:22.480 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:22.480 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.480 [2024-11-20 07:18:44.741199] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:20:22.480 [2024-11-20 07:18:44.741266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3539756 ] 00:20:22.741 [2024-11-20 07:18:44.824682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.741 [2024-11-20 07:18:44.853459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.313 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:23.313 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:23.313 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1WkDES6AI2 00:20:23.573 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:23.834 [2024-11-20 07:18:45.880133] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.834 TLSTESTn1 00:20:23.834 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:24.094 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:24.094 "subsystems": [ 00:20:24.094 { 00:20:24.094 "subsystem": "keyring", 00:20:24.094 "config": [ 00:20:24.094 { 00:20:24.094 "method": "keyring_file_add_key", 00:20:24.094 "params": { 00:20:24.094 "name": "key0", 00:20:24.094 "path": "/tmp/tmp.1WkDES6AI2" 00:20:24.094 } 00:20:24.094 } 00:20:24.094 ] 00:20:24.094 }, 00:20:24.094 { 00:20:24.094 "subsystem": "iobuf", 00:20:24.094 "config": [ 00:20:24.094 { 00:20:24.094 "method": "iobuf_set_options", 00:20:24.094 "params": { 00:20:24.094 "small_pool_count": 8192, 00:20:24.094 "large_pool_count": 1024, 00:20:24.094 "small_bufsize": 8192, 00:20:24.094 "large_bufsize": 135168, 00:20:24.094 "enable_numa": false 00:20:24.094 } 00:20:24.094 } 00:20:24.094 ] 00:20:24.094 }, 00:20:24.094 { 00:20:24.094 "subsystem": "sock", 00:20:24.094 "config": [ 00:20:24.094 { 00:20:24.094 "method": "sock_set_default_impl", 00:20:24.094 "params": { 00:20:24.094 "impl_name": "posix" 
00:20:24.094 } 00:20:24.094 }, 00:20:24.094 { 00:20:24.094 "method": "sock_impl_set_options", 00:20:24.094 "params": { 00:20:24.094 "impl_name": "ssl", 00:20:24.094 "recv_buf_size": 4096, 00:20:24.094 "send_buf_size": 4096, 00:20:24.094 "enable_recv_pipe": true, 00:20:24.094 "enable_quickack": false, 00:20:24.094 "enable_placement_id": 0, 00:20:24.094 "enable_zerocopy_send_server": true, 00:20:24.094 "enable_zerocopy_send_client": false, 00:20:24.094 "zerocopy_threshold": 0, 00:20:24.094 "tls_version": 0, 00:20:24.094 "enable_ktls": false 00:20:24.094 } 00:20:24.094 }, 00:20:24.094 { 00:20:24.094 "method": "sock_impl_set_options", 00:20:24.094 "params": { 00:20:24.094 "impl_name": "posix", 00:20:24.094 "recv_buf_size": 2097152, 00:20:24.094 "send_buf_size": 2097152, 00:20:24.094 "enable_recv_pipe": true, 00:20:24.094 "enable_quickack": false, 00:20:24.094 "enable_placement_id": 0, 00:20:24.094 "enable_zerocopy_send_server": true, 00:20:24.094 "enable_zerocopy_send_client": false, 00:20:24.094 "zerocopy_threshold": 0, 00:20:24.094 "tls_version": 0, 00:20:24.094 "enable_ktls": false 00:20:24.094 } 00:20:24.094 } 00:20:24.094 ] 00:20:24.094 }, 00:20:24.094 { 00:20:24.094 "subsystem": "vmd", 00:20:24.094 "config": [] 00:20:24.094 }, 00:20:24.094 { 00:20:24.094 "subsystem": "accel", 00:20:24.094 "config": [ 00:20:24.094 { 00:20:24.094 "method": "accel_set_options", 00:20:24.094 "params": { 00:20:24.094 "small_cache_size": 128, 00:20:24.094 "large_cache_size": 16, 00:20:24.094 "task_count": 2048, 00:20:24.094 "sequence_count": 2048, 00:20:24.094 "buf_count": 2048 00:20:24.094 } 00:20:24.094 } 00:20:24.094 ] 00:20:24.094 }, 00:20:24.094 { 00:20:24.094 "subsystem": "bdev", 00:20:24.094 "config": [ 00:20:24.094 { 00:20:24.094 "method": "bdev_set_options", 00:20:24.094 "params": { 00:20:24.094 "bdev_io_pool_size": 65535, 00:20:24.094 "bdev_io_cache_size": 256, 00:20:24.094 "bdev_auto_examine": true, 00:20:24.094 "iobuf_small_cache_size": 128, 00:20:24.094 "iobuf_large_cache_size": 16 00:20:24.094 } 00:20:24.094 }, 00:20:24.094 { 00:20:24.094 "method": "bdev_raid_set_options", 00:20:24.094 "params": { 00:20:24.094 "process_window_size_kb": 1024, 00:20:24.094 "process_max_bandwidth_mb_sec": 0 00:20:24.094 } 00:20:24.094 }, 00:20:24.094 { 00:20:24.094 "method": "bdev_iscsi_set_options", 00:20:24.094 "params": { 00:20:24.094 "timeout_sec": 30 00:20:24.094 } 00:20:24.094 }, 00:20:24.094 { 00:20:24.094 "method": "bdev_nvme_set_options", 00:20:24.094 "params": { 00:20:24.094 "action_on_timeout": "none", 00:20:24.094 "timeout_us": 0, 00:20:24.094 "timeout_admin_us": 0, 00:20:24.094 "keep_alive_timeout_ms": 10000, 00:20:24.094 "arbitration_burst": 0, 00:20:24.094 "low_priority_weight": 0, 00:20:24.094 "medium_priority_weight": 0, 00:20:24.094 "high_priority_weight": 0, 00:20:24.094 "nvme_adminq_poll_period_us": 10000, 00:20:24.094 "nvme_ioq_poll_period_us": 0, 00:20:24.094 "io_queue_requests": 0, 00:20:24.094 "delay_cmd_submit": true, 00:20:24.094 "transport_retry_count": 4, 00:20:24.094 "bdev_retry_count": 3, 00:20:24.094 "transport_ack_timeout": 0, 00:20:24.095 "ctrlr_loss_timeout_sec": 0, 00:20:24.095 "reconnect_delay_sec": 0, 00:20:24.095 "fast_io_fail_timeout_sec": 0, 00:20:24.095 "disable_auto_failback": false, 00:20:24.095 "generate_uuids": false, 00:20:24.095 "transport_tos": 0, 00:20:24.095 "nvme_error_stat": false, 00:20:24.095 "rdma_srq_size": 0, 00:20:24.095 "io_path_stat": false, 00:20:24.095 "allow_accel_sequence": false, 00:20:24.095 "rdma_max_cq_size": 0, 00:20:24.095 
"rdma_cm_event_timeout_ms": 0, 00:20:24.095 "dhchap_digests": [ 00:20:24.095 "sha256", 00:20:24.095 "sha384", 00:20:24.095 "sha512" 00:20:24.095 ], 00:20:24.095 "dhchap_dhgroups": [ 00:20:24.095 "null", 00:20:24.095 "ffdhe2048", 00:20:24.095 "ffdhe3072", 00:20:24.095 "ffdhe4096", 00:20:24.095 "ffdhe6144", 00:20:24.095 "ffdhe8192" 00:20:24.095 ] 00:20:24.095 } 00:20:24.095 }, 00:20:24.095 { 00:20:24.095 "method": "bdev_nvme_set_hotplug", 00:20:24.095 "params": { 00:20:24.095 "period_us": 100000, 00:20:24.095 "enable": false 00:20:24.095 } 00:20:24.095 }, 00:20:24.095 { 00:20:24.095 "method": "bdev_malloc_create", 00:20:24.095 "params": { 00:20:24.095 "name": "malloc0", 00:20:24.095 "num_blocks": 8192, 00:20:24.095 "block_size": 4096, 00:20:24.095 "physical_block_size": 4096, 00:20:24.095 "uuid": "48894f2f-c6fd-4cfc-bb86-595ed57dc83a", 00:20:24.095 "optimal_io_boundary": 0, 00:20:24.095 "md_size": 0, 00:20:24.095 "dif_type": 0, 00:20:24.095 "dif_is_head_of_md": false, 00:20:24.095 "dif_pi_format": 0 00:20:24.095 } 00:20:24.095 }, 00:20:24.095 { 00:20:24.095 "method": "bdev_wait_for_examine" 00:20:24.095 } 00:20:24.095 ] 00:20:24.095 }, 00:20:24.095 { 00:20:24.095 "subsystem": "nbd", 00:20:24.095 "config": [] 00:20:24.095 }, 00:20:24.095 { 00:20:24.095 "subsystem": "scheduler", 00:20:24.095 "config": [ 00:20:24.095 { 00:20:24.095 "method": "framework_set_scheduler", 00:20:24.095 "params": { 00:20:24.095 "name": "static" 00:20:24.095 } 00:20:24.095 } 00:20:24.095 ] 00:20:24.095 }, 00:20:24.095 { 00:20:24.095 "subsystem": "nvmf", 00:20:24.095 "config": [ 00:20:24.095 { 00:20:24.095 "method": "nvmf_set_config", 00:20:24.095 "params": { 00:20:24.095 "discovery_filter": "match_any", 00:20:24.095 "admin_cmd_passthru": { 00:20:24.095 "identify_ctrlr": false 00:20:24.095 }, 00:20:24.095 "dhchap_digests": [ 00:20:24.095 "sha256", 00:20:24.095 "sha384", 00:20:24.095 "sha512" 00:20:24.095 ], 00:20:24.095 "dhchap_dhgroups": [ 00:20:24.095 "null", 00:20:24.095 "ffdhe2048", 00:20:24.095 "ffdhe3072", 00:20:24.095 "ffdhe4096", 00:20:24.095 "ffdhe6144", 00:20:24.095 "ffdhe8192" 00:20:24.095 ] 00:20:24.095 } 00:20:24.095 }, 00:20:24.095 { 00:20:24.095 "method": "nvmf_set_max_subsystems", 00:20:24.095 "params": { 00:20:24.095 "max_subsystems": 1024 00:20:24.095 } 00:20:24.095 }, 00:20:24.095 { 00:20:24.095 "method": "nvmf_set_crdt", 00:20:24.095 "params": { 00:20:24.095 "crdt1": 0, 00:20:24.095 "crdt2": 0, 00:20:24.095 "crdt3": 0 00:20:24.095 } 00:20:24.095 }, 00:20:24.095 { 00:20:24.095 "method": "nvmf_create_transport", 00:20:24.095 "params": { 00:20:24.095 "trtype": "TCP", 00:20:24.095 "max_queue_depth": 128, 00:20:24.095 "max_io_qpairs_per_ctrlr": 127, 00:20:24.095 "in_capsule_data_size": 4096, 00:20:24.095 "max_io_size": 131072, 00:20:24.095 "io_unit_size": 131072, 00:20:24.095 "max_aq_depth": 128, 00:20:24.095 "num_shared_buffers": 511, 00:20:24.095 "buf_cache_size": 4294967295, 00:20:24.095 "dif_insert_or_strip": false, 00:20:24.095 "zcopy": false, 00:20:24.095 "c2h_success": false, 00:20:24.095 "sock_priority": 0, 00:20:24.095 "abort_timeout_sec": 1, 00:20:24.095 "ack_timeout": 0, 00:20:24.095 "data_wr_pool_size": 0 00:20:24.095 } 00:20:24.095 }, 00:20:24.095 { 00:20:24.095 "method": "nvmf_create_subsystem", 00:20:24.095 "params": { 00:20:24.095 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.095 "allow_any_host": false, 00:20:24.095 "serial_number": "SPDK00000000000001", 00:20:24.095 "model_number": "SPDK bdev Controller", 00:20:24.095 "max_namespaces": 10, 00:20:24.095 "min_cntlid": 1, 00:20:24.095 
"max_cntlid": 65519, 00:20:24.095 "ana_reporting": false 00:20:24.095 } 00:20:24.095 }, 00:20:24.095 { 00:20:24.095 "method": "nvmf_subsystem_add_host", 00:20:24.095 "params": { 00:20:24.095 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.095 "host": "nqn.2016-06.io.spdk:host1", 00:20:24.095 "psk": "key0" 00:20:24.095 } 00:20:24.095 }, 00:20:24.095 { 00:20:24.095 "method": "nvmf_subsystem_add_ns", 00:20:24.095 "params": { 00:20:24.095 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.095 "namespace": { 00:20:24.095 "nsid": 1, 00:20:24.095 "bdev_name": "malloc0", 00:20:24.095 "nguid": "48894F2FC6FD4CFCBB86595ED57DC83A", 00:20:24.095 "uuid": "48894f2f-c6fd-4cfc-bb86-595ed57dc83a", 00:20:24.095 "no_auto_visible": false 00:20:24.095 } 00:20:24.095 } 00:20:24.095 }, 00:20:24.095 { 00:20:24.095 "method": "nvmf_subsystem_add_listener", 00:20:24.095 "params": { 00:20:24.095 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.095 "listen_address": { 00:20:24.095 "trtype": "TCP", 00:20:24.095 "adrfam": "IPv4", 00:20:24.095 "traddr": "10.0.0.2", 00:20:24.095 "trsvcid": "4420" 00:20:24.095 }, 00:20:24.095 "secure_channel": true 00:20:24.095 } 00:20:24.095 } 00:20:24.095 ] 00:20:24.095 } 00:20:24.095 ] 00:20:24.095 }' 00:20:24.095 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:24.356 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:24.356 "subsystems": [ 00:20:24.356 { 00:20:24.356 "subsystem": "keyring", 00:20:24.356 "config": [ 00:20:24.356 { 00:20:24.356 "method": "keyring_file_add_key", 00:20:24.356 "params": { 00:20:24.356 "name": "key0", 00:20:24.356 "path": "/tmp/tmp.1WkDES6AI2" 00:20:24.356 } 00:20:24.356 } 00:20:24.356 ] 00:20:24.356 }, 00:20:24.356 { 00:20:24.356 "subsystem": "iobuf", 00:20:24.356 "config": [ 00:20:24.356 { 00:20:24.356 "method": "iobuf_set_options", 00:20:24.356 "params": { 00:20:24.356 "small_pool_count": 8192, 00:20:24.356 "large_pool_count": 1024, 00:20:24.356 "small_bufsize": 8192, 00:20:24.356 "large_bufsize": 135168, 00:20:24.356 "enable_numa": false 00:20:24.356 } 00:20:24.356 } 00:20:24.356 ] 00:20:24.356 }, 00:20:24.356 { 00:20:24.356 "subsystem": "sock", 00:20:24.356 "config": [ 00:20:24.356 { 00:20:24.356 "method": "sock_set_default_impl", 00:20:24.356 "params": { 00:20:24.356 "impl_name": "posix" 00:20:24.356 } 00:20:24.356 }, 00:20:24.356 { 00:20:24.356 "method": "sock_impl_set_options", 00:20:24.356 "params": { 00:20:24.356 "impl_name": "ssl", 00:20:24.356 "recv_buf_size": 4096, 00:20:24.356 "send_buf_size": 4096, 00:20:24.357 "enable_recv_pipe": true, 00:20:24.357 "enable_quickack": false, 00:20:24.357 "enable_placement_id": 0, 00:20:24.357 "enable_zerocopy_send_server": true, 00:20:24.357 "enable_zerocopy_send_client": false, 00:20:24.357 "zerocopy_threshold": 0, 00:20:24.357 "tls_version": 0, 00:20:24.357 "enable_ktls": false 00:20:24.357 } 00:20:24.357 }, 00:20:24.357 { 00:20:24.357 "method": "sock_impl_set_options", 00:20:24.357 "params": { 00:20:24.357 "impl_name": "posix", 00:20:24.357 "recv_buf_size": 2097152, 00:20:24.357 "send_buf_size": 2097152, 00:20:24.357 "enable_recv_pipe": true, 00:20:24.357 "enable_quickack": false, 00:20:24.357 "enable_placement_id": 0, 00:20:24.357 "enable_zerocopy_send_server": true, 00:20:24.357 "enable_zerocopy_send_client": false, 00:20:24.357 "zerocopy_threshold": 0, 00:20:24.357 "tls_version": 0, 00:20:24.357 "enable_ktls": false 00:20:24.357 } 00:20:24.357 
} 00:20:24.357 ] 00:20:24.357 }, 00:20:24.357 { 00:20:24.357 "subsystem": "vmd", 00:20:24.357 "config": [] 00:20:24.357 }, 00:20:24.357 { 00:20:24.357 "subsystem": "accel", 00:20:24.357 "config": [ 00:20:24.357 { 00:20:24.357 "method": "accel_set_options", 00:20:24.357 "params": { 00:20:24.357 "small_cache_size": 128, 00:20:24.357 "large_cache_size": 16, 00:20:24.357 "task_count": 2048, 00:20:24.357 "sequence_count": 2048, 00:20:24.357 "buf_count": 2048 00:20:24.357 } 00:20:24.357 } 00:20:24.357 ] 00:20:24.357 }, 00:20:24.357 { 00:20:24.357 "subsystem": "bdev", 00:20:24.357 "config": [ 00:20:24.357 { 00:20:24.357 "method": "bdev_set_options", 00:20:24.357 "params": { 00:20:24.357 "bdev_io_pool_size": 65535, 00:20:24.357 "bdev_io_cache_size": 256, 00:20:24.357 "bdev_auto_examine": true, 00:20:24.357 "iobuf_small_cache_size": 128, 00:20:24.357 "iobuf_large_cache_size": 16 00:20:24.357 } 00:20:24.357 }, 00:20:24.357 { 00:20:24.357 "method": "bdev_raid_set_options", 00:20:24.357 "params": { 00:20:24.357 "process_window_size_kb": 1024, 00:20:24.357 "process_max_bandwidth_mb_sec": 0 00:20:24.357 } 00:20:24.357 }, 00:20:24.357 { 00:20:24.357 "method": "bdev_iscsi_set_options", 00:20:24.357 "params": { 00:20:24.357 "timeout_sec": 30 00:20:24.357 } 00:20:24.357 }, 00:20:24.357 { 00:20:24.357 "method": "bdev_nvme_set_options", 00:20:24.357 "params": { 00:20:24.357 "action_on_timeout": "none", 00:20:24.357 "timeout_us": 0, 00:20:24.357 "timeout_admin_us": 0, 00:20:24.357 "keep_alive_timeout_ms": 10000, 00:20:24.357 "arbitration_burst": 0, 00:20:24.357 "low_priority_weight": 0, 00:20:24.357 "medium_priority_weight": 0, 00:20:24.357 "high_priority_weight": 0, 00:20:24.357 "nvme_adminq_poll_period_us": 10000, 00:20:24.357 "nvme_ioq_poll_period_us": 0, 00:20:24.357 "io_queue_requests": 512, 00:20:24.357 "delay_cmd_submit": true, 00:20:24.357 "transport_retry_count": 4, 00:20:24.357 "bdev_retry_count": 3, 00:20:24.357 "transport_ack_timeout": 0, 00:20:24.357 "ctrlr_loss_timeout_sec": 0, 00:20:24.357 "reconnect_delay_sec": 0, 00:20:24.357 "fast_io_fail_timeout_sec": 0, 00:20:24.357 "disable_auto_failback": false, 00:20:24.357 "generate_uuids": false, 00:20:24.357 "transport_tos": 0, 00:20:24.357 "nvme_error_stat": false, 00:20:24.357 "rdma_srq_size": 0, 00:20:24.357 "io_path_stat": false, 00:20:24.357 "allow_accel_sequence": false, 00:20:24.357 "rdma_max_cq_size": 0, 00:20:24.357 "rdma_cm_event_timeout_ms": 0, 00:20:24.357 "dhchap_digests": [ 00:20:24.357 "sha256", 00:20:24.357 "sha384", 00:20:24.357 "sha512" 00:20:24.357 ], 00:20:24.357 "dhchap_dhgroups": [ 00:20:24.357 "null", 00:20:24.357 "ffdhe2048", 00:20:24.357 "ffdhe3072", 00:20:24.357 "ffdhe4096", 00:20:24.357 "ffdhe6144", 00:20:24.357 "ffdhe8192" 00:20:24.357 ] 00:20:24.357 } 00:20:24.357 }, 00:20:24.357 { 00:20:24.357 "method": "bdev_nvme_attach_controller", 00:20:24.357 "params": { 00:20:24.357 "name": "TLSTEST", 00:20:24.357 "trtype": "TCP", 00:20:24.357 "adrfam": "IPv4", 00:20:24.357 "traddr": "10.0.0.2", 00:20:24.357 "trsvcid": "4420", 00:20:24.357 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.357 "prchk_reftag": false, 00:20:24.357 "prchk_guard": false, 00:20:24.357 "ctrlr_loss_timeout_sec": 0, 00:20:24.357 "reconnect_delay_sec": 0, 00:20:24.357 "fast_io_fail_timeout_sec": 0, 00:20:24.357 "psk": "key0", 00:20:24.357 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:24.357 "hdgst": false, 00:20:24.357 "ddgst": false, 00:20:24.357 "multipath": "multipath" 00:20:24.357 } 00:20:24.357 }, 00:20:24.357 { 00:20:24.357 "method": 
"bdev_nvme_set_hotplug", 00:20:24.357 "params": { 00:20:24.357 "period_us": 100000, 00:20:24.357 "enable": false 00:20:24.357 } 00:20:24.357 }, 00:20:24.357 { 00:20:24.357 "method": "bdev_wait_for_examine" 00:20:24.357 } 00:20:24.357 ] 00:20:24.357 }, 00:20:24.357 { 00:20:24.357 "subsystem": "nbd", 00:20:24.357 "config": [] 00:20:24.357 } 00:20:24.357 ] 00:20:24.357 }' 00:20:24.357 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3539756 00:20:24.357 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3539756 ']' 00:20:24.357 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3539756 00:20:24.357 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:24.357 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:24.357 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3539756 00:20:24.357 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:24.357 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:24.357 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3539756' 00:20:24.357 killing process with pid 3539756 00:20:24.357 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3539756 00:20:24.357 Received shutdown signal, test time was about 10.000000 seconds 00:20:24.357 00:20:24.357 Latency(us) 00:20:24.357 [2024-11-20T06:18:46.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.357 [2024-11-20T06:18:46.635Z] =================================================================================================================== 00:20:24.357 [2024-11-20T06:18:46.635Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:24.357 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3539756 00:20:24.618 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3539349 00:20:24.618 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3539349 ']' 00:20:24.618 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3539349 00:20:24.618 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:24.618 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:24.618 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3539349 00:20:24.618 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:24.618 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:24.618 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3539349' 00:20:24.618 killing process with pid 3539349 00:20:24.618 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3539349 00:20:24.618 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3539349 00:20:24.618 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:24.618 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:24.618 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:24.618 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.618 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:24.618 "subsystems": [ 00:20:24.618 { 00:20:24.618 "subsystem": "keyring", 00:20:24.618 "config": [ 00:20:24.618 { 00:20:24.618 "method": "keyring_file_add_key", 00:20:24.618 "params": { 00:20:24.618 "name": "key0", 00:20:24.618 "path": "/tmp/tmp.1WkDES6AI2" 00:20:24.618 } 00:20:24.618 } 00:20:24.618 ] 00:20:24.618 }, 00:20:24.618 { 00:20:24.618 "subsystem": "iobuf", 00:20:24.618 "config": [ 00:20:24.618 { 00:20:24.618 "method": "iobuf_set_options", 00:20:24.618 "params": { 00:20:24.618 "small_pool_count": 8192, 00:20:24.618 "large_pool_count": 1024, 00:20:24.618 "small_bufsize": 8192, 00:20:24.618 "large_bufsize": 135168, 00:20:24.618 "enable_numa": false 00:20:24.618 } 00:20:24.618 } 00:20:24.618 ] 00:20:24.618 }, 00:20:24.618 { 00:20:24.618 "subsystem": "sock", 00:20:24.618 "config": [ 00:20:24.618 { 00:20:24.618 "method": "sock_set_default_impl", 00:20:24.618 "params": { 00:20:24.618 "impl_name": "posix" 00:20:24.618 } 00:20:24.618 }, 00:20:24.618 { 00:20:24.618 "method": "sock_impl_set_options", 00:20:24.618 "params": { 00:20:24.619 "impl_name": "ssl", 00:20:24.619 "recv_buf_size": 4096, 00:20:24.619 "send_buf_size": 4096, 00:20:24.619 "enable_recv_pipe": true, 00:20:24.619 "enable_quickack": false, 00:20:24.619 "enable_placement_id": 0, 00:20:24.619 "enable_zerocopy_send_server": true, 00:20:24.619 "enable_zerocopy_send_client": false, 00:20:24.619 "zerocopy_threshold": 0, 00:20:24.619 "tls_version": 0, 00:20:24.619 "enable_ktls": false 00:20:24.619 } 00:20:24.619 }, 00:20:24.619 { 00:20:24.619 "method": "sock_impl_set_options", 00:20:24.619 "params": { 00:20:24.619 "impl_name": "posix", 00:20:24.619 "recv_buf_size": 2097152, 00:20:24.619 "send_buf_size": 2097152, 00:20:24.619 "enable_recv_pipe": true, 00:20:24.619 "enable_quickack": false, 00:20:24.619 "enable_placement_id": 0, 00:20:24.619 "enable_zerocopy_send_server": true, 00:20:24.619 "enable_zerocopy_send_client": false, 00:20:24.619 "zerocopy_threshold": 0, 00:20:24.619 "tls_version": 0, 00:20:24.619 "enable_ktls": false 00:20:24.619 } 00:20:24.619 } 00:20:24.619 ] 00:20:24.619 }, 00:20:24.619 { 00:20:24.619 "subsystem": "vmd", 00:20:24.619 "config": [] 00:20:24.619 }, 00:20:24.619 { 00:20:24.619 "subsystem": "accel", 00:20:24.619 "config": [ 00:20:24.619 { 00:20:24.619 "method": "accel_set_options", 00:20:24.619 "params": { 00:20:24.619 "small_cache_size": 128, 00:20:24.619 "large_cache_size": 16, 00:20:24.619 "task_count": 2048, 00:20:24.619 "sequence_count": 2048, 00:20:24.619 "buf_count": 2048 00:20:24.619 } 00:20:24.619 } 00:20:24.619 ] 00:20:24.619 }, 00:20:24.619 { 00:20:24.619 "subsystem": "bdev", 00:20:24.619 "config": [ 00:20:24.619 { 00:20:24.619 "method": "bdev_set_options", 00:20:24.619 "params": { 00:20:24.619 "bdev_io_pool_size": 65535, 00:20:24.619 "bdev_io_cache_size": 256, 00:20:24.619 "bdev_auto_examine": true, 00:20:24.619 "iobuf_small_cache_size": 128, 00:20:24.619 "iobuf_large_cache_size": 16 00:20:24.619 } 00:20:24.619 }, 00:20:24.619 { 00:20:24.619 "method": "bdev_raid_set_options", 00:20:24.619 "params": { 00:20:24.619 
"process_window_size_kb": 1024, 00:20:24.619 "process_max_bandwidth_mb_sec": 0 00:20:24.619 } 00:20:24.619 }, 00:20:24.619 { 00:20:24.619 "method": "bdev_iscsi_set_options", 00:20:24.619 "params": { 00:20:24.619 "timeout_sec": 30 00:20:24.619 } 00:20:24.619 }, 00:20:24.619 { 00:20:24.619 "method": "bdev_nvme_set_options", 00:20:24.619 "params": { 00:20:24.619 "action_on_timeout": "none", 00:20:24.619 "timeout_us": 0, 00:20:24.619 "timeout_admin_us": 0, 00:20:24.619 "keep_alive_timeout_ms": 10000, 00:20:24.619 "arbitration_burst": 0, 00:20:24.619 "low_priority_weight": 0, 00:20:24.619 "medium_priority_weight": 0, 00:20:24.619 "high_priority_weight": 0, 00:20:24.619 "nvme_adminq_poll_period_us": 10000, 00:20:24.619 "nvme_ioq_poll_period_us": 0, 00:20:24.619 "io_queue_requests": 0, 00:20:24.619 "delay_cmd_submit": true, 00:20:24.619 "transport_retry_count": 4, 00:20:24.619 "bdev_retry_count": 3, 00:20:24.619 "transport_ack_timeout": 0, 00:20:24.619 "ctrlr_loss_timeout_sec": 0, 00:20:24.619 "reconnect_delay_sec": 0, 00:20:24.619 "fast_io_fail_timeout_sec": 0, 00:20:24.619 "disable_auto_failback": false, 00:20:24.619 "generate_uuids": false, 00:20:24.619 "transport_tos": 0, 00:20:24.619 "nvme_error_stat": false, 00:20:24.619 "rdma_srq_size": 0, 00:20:24.619 "io_path_stat": false, 00:20:24.619 "allow_accel_sequence": false, 00:20:24.619 "rdma_max_cq_size": 0, 00:20:24.619 "rdma_cm_event_timeout_ms": 0, 00:20:24.619 "dhchap_digests": [ 00:20:24.619 "sha256", 00:20:24.619 "sha384", 00:20:24.619 "sha512" 00:20:24.619 ], 00:20:24.619 "dhchap_dhgroups": [ 00:20:24.619 "null", 00:20:24.619 "ffdhe2048", 00:20:24.619 "ffdhe3072", 00:20:24.619 "ffdhe4096", 00:20:24.619 "ffdhe6144", 00:20:24.619 "ffdhe8192" 00:20:24.619 ] 00:20:24.619 } 00:20:24.619 }, 00:20:24.619 { 00:20:24.619 "method": "bdev_nvme_set_hotplug", 00:20:24.619 "params": { 00:20:24.619 "period_us": 100000, 00:20:24.619 "enable": false 00:20:24.619 } 00:20:24.619 }, 00:20:24.619 { 00:20:24.619 "method": "bdev_malloc_create", 00:20:24.619 "params": { 00:20:24.619 "name": "malloc0", 00:20:24.619 "num_blocks": 8192, 00:20:24.619 "block_size": 4096, 00:20:24.619 "physical_block_size": 4096, 00:20:24.619 "uuid": "48894f2f-c6fd-4cfc-bb86-595ed57dc83a", 00:20:24.619 "optimal_io_boundary": 0, 00:20:24.619 "md_size": 0, 00:20:24.619 "dif_type": 0, 00:20:24.619 "dif_is_head_of_md": false, 00:20:24.619 "dif_pi_format": 0 00:20:24.619 } 00:20:24.619 }, 00:20:24.619 { 00:20:24.619 "method": "bdev_wait_for_examine" 00:20:24.619 } 00:20:24.619 ] 00:20:24.619 }, 00:20:24.619 { 00:20:24.619 "subsystem": "nbd", 00:20:24.619 "config": [] 00:20:24.619 }, 00:20:24.619 { 00:20:24.619 "subsystem": "scheduler", 00:20:24.619 "config": [ 00:20:24.619 { 00:20:24.619 "method": "framework_set_scheduler", 00:20:24.619 "params": { 00:20:24.619 "name": "static" 00:20:24.619 } 00:20:24.619 } 00:20:24.619 ] 00:20:24.619 }, 00:20:24.619 { 00:20:24.619 "subsystem": "nvmf", 00:20:24.619 "config": [ 00:20:24.619 { 00:20:24.619 "method": "nvmf_set_config", 00:20:24.619 "params": { 00:20:24.619 "discovery_filter": "match_any", 00:20:24.619 "admin_cmd_passthru": { 00:20:24.619 "identify_ctrlr": false 00:20:24.619 }, 00:20:24.619 "dhchap_digests": [ 00:20:24.619 "sha256", 00:20:24.619 "sha384", 00:20:24.619 "sha512" 00:20:24.619 ], 00:20:24.619 "dhchap_dhgroups": [ 00:20:24.619 "null", 00:20:24.619 "ffdhe2048", 00:20:24.619 "ffdhe3072", 00:20:24.619 "ffdhe4096", 00:20:24.619 "ffdhe6144", 00:20:24.619 "ffdhe8192" 00:20:24.619 ] 00:20:24.619 } 00:20:24.619 }, 00:20:24.619 { 
00:20:24.619 "method": "nvmf_set_max_subsystems", 00:20:24.619 "params": { 00:20:24.619 "max_subsystems": 1024 00:20:24.619 } 00:20:24.619 }, 00:20:24.619 { 00:20:24.619 "method": "nvmf_set_crdt", 00:20:24.619 "params": { 00:20:24.619 "crdt1": 0, 00:20:24.619 "crdt2": 0, 00:20:24.619 "crdt3": 0 00:20:24.619 } 00:20:24.619 }, 00:20:24.619 { 00:20:24.619 "method": "nvmf_create_transport", 00:20:24.619 "params": { 00:20:24.619 "trtype": "TCP", 00:20:24.619 "max_queue_depth": 128, 00:20:24.619 "max_io_qpairs_per_ctrlr": 127, 00:20:24.619 "in_capsule_data_size": 4096, 00:20:24.619 "max_io_size": 131072, 00:20:24.619 "io_unit_size": 131072, 00:20:24.619 "max_aq_depth": 128, 00:20:24.619 "num_shared_buffers": 511, 00:20:24.619 "buf_cache_size": 4294967295, 00:20:24.619 "dif_insert_or_strip": false, 00:20:24.619 "zcopy": false, 00:20:24.619 "c2h_success": false, 00:20:24.619 "sock_priority": 0, 00:20:24.619 "abort_timeout_sec": 1, 00:20:24.619 "ack_timeout": 0, 00:20:24.619 "data_wr_pool_size": 0 00:20:24.619 } 00:20:24.619 }, 00:20:24.619 { 00:20:24.619 "method": "nvmf_create_subsystem", 00:20:24.619 "params": { 00:20:24.619 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.619 "allow_any_host": false, 00:20:24.619 "serial_number": "SPDK00000000000001", 00:20:24.619 "model_number": "SPDK bdev Controller", 00:20:24.619 "max_namespaces": 10, 00:20:24.619 "min_cntlid": 1, 00:20:24.619 "max_cntlid": 65519, 00:20:24.619 "ana_reporting": false 00:20:24.619 } 00:20:24.619 }, 00:20:24.619 { 00:20:24.619 "method": "nvmf_subsystem_add_host", 00:20:24.619 "params": { 00:20:24.619 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.619 "host": "nqn.2016-06.io.spdk:host1", 00:20:24.620 "psk": "key0" 00:20:24.620 } 00:20:24.620 }, 00:20:24.620 { 00:20:24.620 "method": "nvmf_subsystem_add_ns", 00:20:24.620 "params": { 00:20:24.620 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.620 "namespace": { 00:20:24.620 "nsid": 1, 00:20:24.620 "bdev_name": "malloc0", 00:20:24.620 "nguid": "48894F2FC6FD4CFCBB86595ED57DC83A", 00:20:24.620 "uuid": "48894f2f-c6fd-4cfc-bb86-595ed57dc83a", 00:20:24.620 "no_auto_visible": false 00:20:24.620 } 00:20:24.620 } 00:20:24.620 }, 00:20:24.620 { 00:20:24.620 "method": "nvmf_subsystem_add_listener", 00:20:24.620 "params": { 00:20:24.620 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.620 "listen_address": { 00:20:24.620 "trtype": "TCP", 00:20:24.620 "adrfam": "IPv4", 00:20:24.620 "traddr": "10.0.0.2", 00:20:24.620 "trsvcid": "4420" 00:20:24.620 }, 00:20:24.620 "secure_channel": true 00:20:24.620 } 00:20:24.620 } 00:20:24.620 ] 00:20:24.620 } 00:20:24.620 ] 00:20:24.620 }' 00:20:24.620 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3540114 00:20:24.620 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3540114 00:20:24.620 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:24.620 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3540114 ']' 00:20:24.620 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.620 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:24.620 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:20:24.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.620 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:24.620 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.880 [2024-11-20 07:18:46.892592] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:20:24.880 [2024-11-20 07:18:46.892655] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.880 [2024-11-20 07:18:46.982587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.880 [2024-11-20 07:18:47.011960] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.880 [2024-11-20 07:18:47.011988] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.880 [2024-11-20 07:18:47.011994] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.880 [2024-11-20 07:18:47.011999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.880 [2024-11-20 07:18:47.012004] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.880 [2024-11-20 07:18:47.012450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.140 [2024-11-20 07:18:47.204757] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.140 [2024-11-20 07:18:47.236780] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:25.140 [2024-11-20 07:18:47.236989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.401 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:25.401 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:25.401 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:25.401 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:25.401 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.662 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.662 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3540408 00:20:25.662 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3540408 /var/tmp/bdevperf.sock 00:20:25.662 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3540408 ']' 00:20:25.662 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.662 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:25.662 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:25.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.662 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:25.662 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:25.662 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.662 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:25.662 "subsystems": [ 00:20:25.662 { 00:20:25.662 "subsystem": "keyring", 00:20:25.662 "config": [ 00:20:25.662 { 00:20:25.662 "method": "keyring_file_add_key", 00:20:25.662 "params": { 00:20:25.662 "name": "key0", 00:20:25.662 "path": "/tmp/tmp.1WkDES6AI2" 00:20:25.662 } 00:20:25.662 } 00:20:25.662 ] 00:20:25.662 }, 00:20:25.662 { 00:20:25.662 "subsystem": "iobuf", 00:20:25.662 "config": [ 00:20:25.662 { 00:20:25.662 "method": "iobuf_set_options", 00:20:25.662 "params": { 00:20:25.662 "small_pool_count": 8192, 00:20:25.662 "large_pool_count": 1024, 00:20:25.662 "small_bufsize": 8192, 00:20:25.662 "large_bufsize": 135168, 00:20:25.662 "enable_numa": false 00:20:25.662 } 00:20:25.662 } 00:20:25.662 ] 00:20:25.662 }, 00:20:25.662 { 00:20:25.662 "subsystem": "sock", 00:20:25.662 "config": [ 00:20:25.662 { 00:20:25.662 "method": "sock_set_default_impl", 00:20:25.662 "params": { 00:20:25.662 "impl_name": "posix" 00:20:25.662 } 00:20:25.662 }, 00:20:25.662 { 00:20:25.662 "method": "sock_impl_set_options", 00:20:25.662 "params": { 00:20:25.662 "impl_name": "ssl", 00:20:25.662 "recv_buf_size": 4096, 00:20:25.662 "send_buf_size": 4096, 00:20:25.662 "enable_recv_pipe": true, 00:20:25.662 "enable_quickack": false, 00:20:25.662 "enable_placement_id": 0, 00:20:25.662 "enable_zerocopy_send_server": true, 00:20:25.662 "enable_zerocopy_send_client": false, 00:20:25.662 "zerocopy_threshold": 0, 00:20:25.662 "tls_version": 0, 00:20:25.662 "enable_ktls": false 00:20:25.662 } 00:20:25.662 }, 00:20:25.662 { 00:20:25.662 "method": "sock_impl_set_options", 00:20:25.662 "params": { 00:20:25.662 "impl_name": "posix", 00:20:25.662 "recv_buf_size": 2097152, 00:20:25.662 "send_buf_size": 2097152, 00:20:25.662 "enable_recv_pipe": true, 00:20:25.662 "enable_quickack": false, 00:20:25.662 "enable_placement_id": 0, 00:20:25.662 "enable_zerocopy_send_server": true, 00:20:25.662 "enable_zerocopy_send_client": false, 00:20:25.662 "zerocopy_threshold": 0, 00:20:25.662 "tls_version": 0, 00:20:25.662 "enable_ktls": false 00:20:25.662 } 00:20:25.662 } 00:20:25.662 ] 00:20:25.662 }, 00:20:25.662 { 00:20:25.662 "subsystem": "vmd", 00:20:25.662 "config": [] 00:20:25.662 }, 00:20:25.662 { 00:20:25.662 "subsystem": "accel", 00:20:25.662 "config": [ 00:20:25.662 { 00:20:25.662 "method": "accel_set_options", 00:20:25.662 "params": { 00:20:25.662 "small_cache_size": 128, 00:20:25.662 "large_cache_size": 16, 00:20:25.662 "task_count": 2048, 00:20:25.662 "sequence_count": 2048, 00:20:25.662 "buf_count": 2048 00:20:25.662 } 00:20:25.662 } 00:20:25.662 ] 00:20:25.662 }, 00:20:25.662 { 00:20:25.662 "subsystem": "bdev", 00:20:25.662 "config": [ 00:20:25.662 { 00:20:25.662 "method": "bdev_set_options", 00:20:25.662 "params": { 00:20:25.662 "bdev_io_pool_size": 65535, 00:20:25.662 "bdev_io_cache_size": 256, 00:20:25.662 "bdev_auto_examine": true, 00:20:25.662 "iobuf_small_cache_size": 128, 
00:20:25.662 "iobuf_large_cache_size": 16 00:20:25.662 } 00:20:25.662 }, 00:20:25.662 { 00:20:25.662 "method": "bdev_raid_set_options", 00:20:25.662 "params": { 00:20:25.662 "process_window_size_kb": 1024, 00:20:25.662 "process_max_bandwidth_mb_sec": 0 00:20:25.662 } 00:20:25.662 }, 00:20:25.662 { 00:20:25.663 "method": "bdev_iscsi_set_options", 00:20:25.663 "params": { 00:20:25.663 "timeout_sec": 30 00:20:25.663 } 00:20:25.663 }, 00:20:25.663 { 00:20:25.663 "method": "bdev_nvme_set_options", 00:20:25.663 "params": { 00:20:25.663 "action_on_timeout": "none", 00:20:25.663 "timeout_us": 0, 00:20:25.663 "timeout_admin_us": 0, 00:20:25.663 "keep_alive_timeout_ms": 10000, 00:20:25.663 "arbitration_burst": 0, 00:20:25.663 "low_priority_weight": 0, 00:20:25.663 "medium_priority_weight": 0, 00:20:25.663 "high_priority_weight": 0, 00:20:25.663 "nvme_adminq_poll_period_us": 10000, 00:20:25.663 "nvme_ioq_poll_period_us": 0, 00:20:25.663 "io_queue_requests": 512, 00:20:25.663 "delay_cmd_submit": true, 00:20:25.663 "transport_retry_count": 4, 00:20:25.663 "bdev_retry_count": 3, 00:20:25.663 "transport_ack_timeout": 0, 00:20:25.663 "ctrlr_loss_timeout_sec": 0, 00:20:25.663 "reconnect_delay_sec": 0, 00:20:25.663 "fast_io_fail_timeout_sec": 0, 00:20:25.663 "disable_auto_failback": false, 00:20:25.663 "generate_uuids": false, 00:20:25.663 "transport_tos": 0, 00:20:25.663 "nvme_error_stat": false, 00:20:25.663 "rdma_srq_size": 0, 00:20:25.663 "io_path_stat": false, 00:20:25.663 "allow_accel_sequence": false, 00:20:25.663 "rdma_max_cq_size": 0, 00:20:25.663 "rdma_cm_event_timeout_ms": 0, 00:20:25.663 "dhchap_digests": [ 00:20:25.663 "sha256", 00:20:25.663 "sha384", 00:20:25.663 "sha512" 00:20:25.663 ], 00:20:25.663 "dhchap_dhgroups": [ 00:20:25.663 "null", 00:20:25.663 "ffdhe2048", 00:20:25.663 "ffdhe3072", 00:20:25.663 "ffdhe4096", 00:20:25.663 "ffdhe6144", 00:20:25.663 "ffdhe8192" 00:20:25.663 ] 00:20:25.663 } 00:20:25.663 }, 00:20:25.663 { 00:20:25.663 "method": "bdev_nvme_attach_controller", 00:20:25.663 "params": { 00:20:25.663 "name": "TLSTEST", 00:20:25.663 "trtype": "TCP", 00:20:25.663 "adrfam": "IPv4", 00:20:25.663 "traddr": "10.0.0.2", 00:20:25.663 "trsvcid": "4420", 00:20:25.663 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.663 "prchk_reftag": false, 00:20:25.663 "prchk_guard": false, 00:20:25.663 "ctrlr_loss_timeout_sec": 0, 00:20:25.663 "reconnect_delay_sec": 0, 00:20:25.663 "fast_io_fail_timeout_sec": 0, 00:20:25.663 "psk": "key0", 00:20:25.663 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.663 "hdgst": false, 00:20:25.663 "ddgst": false, 00:20:25.663 "multipath": "multipath" 00:20:25.663 } 00:20:25.663 }, 00:20:25.663 { 00:20:25.663 "method": "bdev_nvme_set_hotplug", 00:20:25.663 "params": { 00:20:25.663 "period_us": 100000, 00:20:25.663 "enable": false 00:20:25.663 } 00:20:25.663 }, 00:20:25.663 { 00:20:25.663 "method": "bdev_wait_for_examine" 00:20:25.663 } 00:20:25.663 ] 00:20:25.663 }, 00:20:25.663 { 00:20:25.663 "subsystem": "nbd", 00:20:25.663 "config": [] 00:20:25.663 } 00:20:25.663 ] 00:20:25.663 }' 00:20:25.663 [2024-11-20 07:18:47.767257] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:20:25.663 [2024-11-20 07:18:47.767313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3540408 ] 00:20:25.663 [2024-11-20 07:18:47.848963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.663 [2024-11-20 07:18:47.878275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.924 [2024-11-20 07:18:48.012191] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:26.494 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:26.494 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:26.494 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:26.494 Running I/O for 10 seconds... 00:20:28.383 4947.00 IOPS, 19.32 MiB/s [2024-11-20T06:18:52.045Z] 5735.50 IOPS, 22.40 MiB/s [2024-11-20T06:18:53.021Z] 5965.00 IOPS, 23.30 MiB/s [2024-11-20T06:18:53.963Z] 6018.25 IOPS, 23.51 MiB/s [2024-11-20T06:18:54.906Z] 6094.80 IOPS, 23.81 MiB/s [2024-11-20T06:18:55.847Z] 6123.67 IOPS, 23.92 MiB/s [2024-11-20T06:18:56.790Z] 6111.86 IOPS, 23.87 MiB/s [2024-11-20T06:18:57.732Z] 6055.00 IOPS, 23.65 MiB/s [2024-11-20T06:18:59.118Z] 6018.00 IOPS, 23.51 MiB/s [2024-11-20T06:18:59.118Z] 6047.40 IOPS, 23.62 MiB/s 00:20:36.840 Latency(us) 00:20:36.840 [2024-11-20T06:18:59.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.840 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:36.840 Verification LBA range: start 0x0 length 0x2000 00:20:36.840 TLSTESTn1 : 10.02 6047.65 23.62 0.00 0.00 21132.78 4587.52 50025.81 00:20:36.840 [2024-11-20T06:18:59.118Z] =================================================================================================================== 00:20:36.840 [2024-11-20T06:18:59.118Z] Total : 6047.65 23.62 0.00 0.00 21132.78 4587.52 50025.81 00:20:36.840 { 00:20:36.840 "results": [ 00:20:36.840 { 00:20:36.840 "job": "TLSTESTn1", 00:20:36.840 "core_mask": "0x4", 00:20:36.840 "workload": "verify", 00:20:36.840 "status": "finished", 00:20:36.840 "verify_range": { 00:20:36.840 "start": 0, 00:20:36.840 "length": 8192 00:20:36.840 }, 00:20:36.840 "queue_depth": 128, 00:20:36.840 "io_size": 4096, 00:20:36.840 "runtime": 10.020588, 00:20:36.840 "iops": 6047.649100032852, 00:20:36.840 "mibps": 23.62362929700333, 00:20:36.840 "io_failed": 0, 00:20:36.840 "io_timeout": 0, 00:20:36.840 "avg_latency_us": 21132.77762501169, 00:20:36.840 "min_latency_us": 4587.52, 00:20:36.840 "max_latency_us": 50025.81333333333 00:20:36.840 } 00:20:36.840 ], 00:20:36.840 "core_count": 1 00:20:36.840 } 00:20:36.840 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:36.840 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3540408 00:20:36.840 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3540408 ']' 00:20:36.840 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3540408 00:20:36.840 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:20:36.840 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:36.840 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3540408 00:20:36.840 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:36.840 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:36.840 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3540408' 00:20:36.840 killing process with pid 3540408 00:20:36.840 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3540408 00:20:36.840 Received shutdown signal, test time was about 10.000000 seconds 00:20:36.840 00:20:36.840 Latency(us) 00:20:36.840 [2024-11-20T06:18:59.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.840 [2024-11-20T06:18:59.118Z] =================================================================================================================== 00:20:36.840 [2024-11-20T06:18:59.118Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:36.841 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3540408 00:20:36.841 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3540114 00:20:36.841 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3540114 ']' 00:20:36.841 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3540114 00:20:36.841 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:36.841 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:36.841 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3540114 00:20:36.841 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:36.841 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:36.841 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3540114' 00:20:36.841 killing process with pid 3540114 00:20:36.841 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3540114 00:20:36.841 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3540114 00:20:36.841 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:36.841 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:36.841 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:36.841 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.841 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3542483 00:20:36.841 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3542483 00:20:36.841 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
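The two killprocess calls just traced (bdevperf pid 3540408, then target pid 3540114) share one teardown pattern: confirm the pid still exists, refuse to signal the sudo wrapper itself, kill, then wait so the exit status is actually reaped. Condensed from the xtrace above (any signal escalation beyond plain kill is omitted here):

  # Teardown pattern behind the 'killing process with pid ...' messages above.
  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                                      # still running?
      if [ "$(uname)" = Linux ]; then
          [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1 # never kill sudo itself
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                                     # reap; surfaces the exit code
  }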
00:20:36.841 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3542483 ']' 00:20:36.841 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.841 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:36.841 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.841 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:36.841 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.103 [2024-11-20 07:18:59.119514] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:20:37.103 [2024-11-20 07:18:59.119568] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.103 [2024-11-20 07:18:59.215109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.103 [2024-11-20 07:18:59.262237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.103 [2024-11-20 07:18:59.262294] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.103 [2024-11-20 07:18:59.262303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.103 [2024-11-20 07:18:59.262310] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.103 [2024-11-20 07:18:59.262316] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:37.103 [2024-11-20 07:18:59.263059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.676 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:37.676 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:37.676 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:37.677 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:37.677 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.938 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.938 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.1WkDES6AI2 00:20:37.938 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.1WkDES6AI2 00:20:37.938 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:37.938 [2024-11-20 07:19:00.137588] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.938 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:38.199 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:38.461 [2024-11-20 07:19:00.530562] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:38.461 [2024-11-20 07:19:00.530928] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.461 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:38.722 malloc0 00:20:38.722 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:38.722 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.1WkDES6AI2 00:20:38.984 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:39.246 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3542976 00:20:39.247 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:39.247 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:39.247 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3542976 /var/tmp/bdevperf.sock 00:20:39.247 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 3542976 ']' 00:20:39.247 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.247 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:39.247 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.247 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:39.247 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.247 [2024-11-20 07:19:01.407847] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:20:39.247 [2024-11-20 07:19:01.407922] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3542976 ] 00:20:39.247 [2024-11-20 07:19:01.496247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.507 [2024-11-20 07:19:01.531152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.078 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:40.078 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:40.078 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1WkDES6AI2 00:20:40.338 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:40.338 [2024-11-20 07:19:02.549420] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:40.599 nvme0n1 00:20:40.599 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:40.599 Running I/O for 1 seconds... 
00:20:41.542 5756.00 IOPS, 22.48 MiB/s
00:20:41.542 Latency(us)
00:20:41.542 [2024-11-20T06:19:03.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:41.542 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:20:41.542 Verification LBA range: start 0x0 length 0x2000
00:20:41.542 nvme0n1 : 1.02 5793.90 22.63 0.00 0.00 21910.55 5898.24 25012.91
00:20:41.542 [2024-11-20T06:19:03.820Z] ===================================================================================================================
00:20:41.542 [2024-11-20T06:19:03.820Z] Total : 5793.90 22.63 0.00 0.00 21910.55 5898.24 25012.91
00:20:41.542 {
00:20:41.542 "results": [
00:20:41.542 {
00:20:41.542 "job": "nvme0n1",
00:20:41.542 "core_mask": "0x2",
00:20:41.542 "workload": "verify",
00:20:41.542 "status": "finished",
00:20:41.542 "verify_range": {
00:20:41.542 "start": 0,
00:20:41.542 "length": 8192
00:20:41.542 },
00:20:41.542 "queue_depth": 128,
00:20:41.542 "io_size": 4096,
00:20:41.542 "runtime": 1.015724,
00:20:41.542 "iops": 5793.896767232043,
00:20:41.542 "mibps": 22.63240924700017,
00:20:41.542 "io_failed": 0,
00:20:41.542 "io_timeout": 0,
00:20:41.542 "avg_latency_us": 21910.551490229394,
00:20:41.542 "min_latency_us": 5898.24,
00:20:41.542 "max_latency_us": 25012.906666666666
00:20:41.542 }
00:20:41.542 ],
00:20:41.542 "core_count": 1
00:20:41.542 }
00:20:41.542 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3542976
00:20:41.542 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3542976 ']'
00:20:41.542 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3542976
00:20:41.542 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname
00:20:41.542 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:20:41.542 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3542976
00:20:41.804 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:20:41.804 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:20:41.804 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3542976'
00:20:41.804 killing process with pid 3542976
00:20:41.804 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3542976
00:20:41.804 Received shutdown signal, test time was about 1.000000 seconds
00:20:41.804 
00:20:41.804 Latency(us)
00:20:41.804 [2024-11-20T06:19:04.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:41.804 [2024-11-20T06:19:04.082Z] ===================================================================================================================
00:20:41.804 [2024-11-20T06:19:04.082Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:41.804 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3542976
00:20:41.804 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3542483
00:20:41.804 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3542483 ']'
00:20:41.804 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3542483
00:20:41.804 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname
00:20:41.804 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:20:41.804 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3542483
00:20:41.804 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:20:41.804 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:20:41.804 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3542483'
00:20:41.804 killing process with pid 3542483
00:20:41.804 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3542483
00:20:41.804 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3542483
00:20:42.066 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart
00:20:42.066 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:42.066 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable
00:20:42.066 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:42.066 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3543533
00:20:42.066 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3543533
00:20:42.066 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:20:42.066 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3543533 ']'
00:20:42.066 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:42.066 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100
00:20:42.066 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:42.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:42.066 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable
00:20:42.066 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:42.066 [2024-11-20 07:19:04.207294] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:20:42.066 [2024-11-20 07:19:04.207358] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:42.066 [2024-11-20 07:19:04.293245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:42.066 [2024-11-20 07:19:04.333908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:42.066 [2024-11-20 07:19:04.333951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
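The target is started with -e 0xFFFF, so every tracepoint group is enabled, and the notices above spell out how to inspect them. A hedged sketch of both options (the spdk_trace location under build/bin is an assumption about this build tree; the '-s nvmf -i 0' arguments are quoted from the notice itself):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # build tree used by this run
  # Live snapshot of events from the app with shm instance id 0:
  $SPDK/build/bin/spdk_trace -s nvmf -i 0
  # Or keep the shared-memory trace file for offline analysis, as the
  # app.c: 620 notice suggests:
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0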
00:20:42.066 [2024-11-20 07:19:04.333960] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:42.066 [2024-11-20 07:19:04.333967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:42.066 [2024-11-20 07:19:04.333973] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:42.066 [2024-11-20 07:19:04.334682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:43.009 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:20:43.009 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0
00:20:43.010 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:20:43.010 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable
00:20:43.010 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:43.010 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:43.010 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd
00:20:43.010 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:43.010 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:43.010 [2024-11-20 07:19:05.065702] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:43.010 malloc0
00:20:43.010 [2024-11-20 07:19:05.095790] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:20:43.010 [2024-11-20 07:19:05.096146] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:43.010 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:43.010 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3543799
00:20:43.010 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3543799 /var/tmp/bdevperf.sock
00:20:43.010 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1
00:20:43.010 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3543799 ']'
00:20:43.010 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:43.010 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100
00:20:43.010 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:43.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:43.010 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable
00:20:43.010 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:43.010 [2024-11-20 07:19:05.179098] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
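This is the bdevperf pattern used for all three measured runs in this section: start the app idle with -z on its own RPC socket, configure it over that socket (the keyring and attach-controller calls visible in the next stretch of trace), then trigger the job externally. A condensed sketch; all flags are the ones from the trace:

  BP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  BPY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py

  $BP -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &   # core mask 0x2, wait idle for RPC
  bdevperf_pid=$!
  # ... waitforlisten, then keyring_file_add_key and bdev_nvme_attach_controller
  #     issued with 'rpc.py -s /var/tmp/bdevperf.sock' ...
  $BPY -s /var/tmp/bdevperf.sock perform_tests                          # run the 1 s verify job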
00:20:43.010 [2024-11-20 07:19:05.179171] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3543799 ]
00:20:43.010 [2024-11-20 07:19:05.266086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:43.271 [2024-11-20 07:19:05.300581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:43.842 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:20:43.842 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0
00:20:43.842 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1WkDES6AI2
00:20:44.103 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
00:20:44.103 [2024-11-20 07:19:06.306721] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:20:44.364 nvme0n1
00:20:44.364 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:44.364 Running I/O for 1 seconds...
00:20:45.307 4348.00 IOPS, 16.98 MiB/s
00:20:45.307 Latency(us)
00:20:45.307 [2024-11-20T06:19:07.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:45.307 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:20:45.307 Verification LBA range: start 0x0 length 0x2000
00:20:45.307 nvme0n1 : 1.01 4413.64 17.24 0.00 0.00 28815.38 4915.20 66409.81
00:20:45.307 [2024-11-20T06:19:07.585Z] ===================================================================================================================
00:20:45.307 [2024-11-20T06:19:07.585Z] Total : 4413.64 17.24 0.00 0.00 28815.38 4915.20 66409.81
00:20:45.307 {
00:20:45.307 "results": [
00:20:45.307 {
00:20:45.307 "job": "nvme0n1",
00:20:45.307 "core_mask": "0x2",
00:20:45.307 "workload": "verify",
00:20:45.307 "status": "finished",
00:20:45.307 "verify_range": {
00:20:45.307 "start": 0,
00:20:45.307 "length": 8192
00:20:45.307 },
00:20:45.307 "queue_depth": 128,
00:20:45.307 "io_size": 4096,
00:20:45.307 "runtime": 1.01413,
00:20:45.307 "iops": 4413.635332748267,
00:20:45.307 "mibps": 17.240763018547916,
00:20:45.307 "io_failed": 0,
00:20:45.307 "io_timeout": 0,
00:20:45.307 "avg_latency_us": 28815.381352397973,
00:20:45.307 "min_latency_us": 4915.2,
00:20:45.307 "max_latency_us": 66409.81333333334
00:20:45.307 }
00:20:45.307 ],
00:20:45.307 "core_count": 1
00:20:45.307 }
00:20:45.307 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config
00:20:45.307 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:45.307 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:45.568 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:45.568 07:19:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:45.568 "subsystems": [ 00:20:45.568 { 00:20:45.568 "subsystem": "keyring", 00:20:45.568 "config": [ 00:20:45.568 { 00:20:45.568 "method": "keyring_file_add_key", 00:20:45.568 "params": { 00:20:45.568 "name": "key0", 00:20:45.568 "path": "/tmp/tmp.1WkDES6AI2" 00:20:45.568 } 00:20:45.568 } 00:20:45.568 ] 00:20:45.568 }, 00:20:45.568 { 00:20:45.568 "subsystem": "iobuf", 00:20:45.568 "config": [ 00:20:45.568 { 00:20:45.568 "method": "iobuf_set_options", 00:20:45.568 "params": { 00:20:45.568 "small_pool_count": 8192, 00:20:45.568 "large_pool_count": 1024, 00:20:45.568 "small_bufsize": 8192, 00:20:45.568 "large_bufsize": 135168, 00:20:45.568 "enable_numa": false 00:20:45.568 } 00:20:45.568 } 00:20:45.568 ] 00:20:45.568 }, 00:20:45.568 { 00:20:45.568 "subsystem": "sock", 00:20:45.568 "config": [ 00:20:45.568 { 00:20:45.568 "method": "sock_set_default_impl", 00:20:45.568 "params": { 00:20:45.568 "impl_name": "posix" 00:20:45.568 } 00:20:45.568 }, 00:20:45.568 { 00:20:45.568 "method": "sock_impl_set_options", 00:20:45.568 "params": { 00:20:45.568 "impl_name": "ssl", 00:20:45.568 "recv_buf_size": 4096, 00:20:45.568 "send_buf_size": 4096, 00:20:45.568 "enable_recv_pipe": true, 00:20:45.568 "enable_quickack": false, 00:20:45.568 "enable_placement_id": 0, 00:20:45.568 "enable_zerocopy_send_server": true, 00:20:45.568 "enable_zerocopy_send_client": false, 00:20:45.568 "zerocopy_threshold": 0, 00:20:45.568 "tls_version": 0, 00:20:45.568 "enable_ktls": false 00:20:45.568 } 00:20:45.568 }, 00:20:45.568 { 00:20:45.568 "method": "sock_impl_set_options", 00:20:45.568 "params": { 00:20:45.568 "impl_name": "posix", 00:20:45.568 "recv_buf_size": 2097152, 00:20:45.568 "send_buf_size": 2097152, 00:20:45.568 "enable_recv_pipe": true, 00:20:45.568 "enable_quickack": false, 00:20:45.568 "enable_placement_id": 0, 00:20:45.568 "enable_zerocopy_send_server": true, 00:20:45.568 "enable_zerocopy_send_client": false, 00:20:45.568 "zerocopy_threshold": 0, 00:20:45.568 "tls_version": 0, 00:20:45.568 "enable_ktls": false 00:20:45.568 } 00:20:45.568 } 00:20:45.568 ] 00:20:45.568 }, 00:20:45.568 { 00:20:45.568 "subsystem": "vmd", 00:20:45.568 "config": [] 00:20:45.568 }, 00:20:45.568 { 00:20:45.568 "subsystem": "accel", 00:20:45.568 "config": [ 00:20:45.568 { 00:20:45.568 "method": "accel_set_options", 00:20:45.568 "params": { 00:20:45.568 "small_cache_size": 128, 00:20:45.568 "large_cache_size": 16, 00:20:45.568 "task_count": 2048, 00:20:45.568 "sequence_count": 2048, 00:20:45.568 "buf_count": 2048 00:20:45.568 } 00:20:45.568 } 00:20:45.568 ] 00:20:45.568 }, 00:20:45.568 { 00:20:45.568 "subsystem": "bdev", 00:20:45.568 "config": [ 00:20:45.568 { 00:20:45.568 "method": "bdev_set_options", 00:20:45.568 "params": { 00:20:45.568 "bdev_io_pool_size": 65535, 00:20:45.568 "bdev_io_cache_size": 256, 00:20:45.568 "bdev_auto_examine": true, 00:20:45.568 "iobuf_small_cache_size": 128, 00:20:45.568 "iobuf_large_cache_size": 16 00:20:45.568 } 00:20:45.568 }, 00:20:45.568 { 00:20:45.568 "method": "bdev_raid_set_options", 00:20:45.568 "params": { 00:20:45.568 "process_window_size_kb": 1024, 00:20:45.568 "process_max_bandwidth_mb_sec": 0 00:20:45.568 } 00:20:45.568 }, 00:20:45.568 { 00:20:45.568 "method": "bdev_iscsi_set_options", 00:20:45.568 "params": { 00:20:45.568 "timeout_sec": 30 00:20:45.568 } 00:20:45.568 }, 00:20:45.568 { 00:20:45.568 "method": "bdev_nvme_set_options", 00:20:45.568 "params": { 00:20:45.568 "action_on_timeout": "none", 00:20:45.568 
"timeout_us": 0, 00:20:45.568 "timeout_admin_us": 0, 00:20:45.568 "keep_alive_timeout_ms": 10000, 00:20:45.568 "arbitration_burst": 0, 00:20:45.568 "low_priority_weight": 0, 00:20:45.568 "medium_priority_weight": 0, 00:20:45.568 "high_priority_weight": 0, 00:20:45.568 "nvme_adminq_poll_period_us": 10000, 00:20:45.568 "nvme_ioq_poll_period_us": 0, 00:20:45.568 "io_queue_requests": 0, 00:20:45.568 "delay_cmd_submit": true, 00:20:45.568 "transport_retry_count": 4, 00:20:45.568 "bdev_retry_count": 3, 00:20:45.568 "transport_ack_timeout": 0, 00:20:45.568 "ctrlr_loss_timeout_sec": 0, 00:20:45.568 "reconnect_delay_sec": 0, 00:20:45.568 "fast_io_fail_timeout_sec": 0, 00:20:45.568 "disable_auto_failback": false, 00:20:45.568 "generate_uuids": false, 00:20:45.568 "transport_tos": 0, 00:20:45.568 "nvme_error_stat": false, 00:20:45.568 "rdma_srq_size": 0, 00:20:45.568 "io_path_stat": false, 00:20:45.568 "allow_accel_sequence": false, 00:20:45.568 "rdma_max_cq_size": 0, 00:20:45.568 "rdma_cm_event_timeout_ms": 0, 00:20:45.568 "dhchap_digests": [ 00:20:45.568 "sha256", 00:20:45.568 "sha384", 00:20:45.568 "sha512" 00:20:45.568 ], 00:20:45.568 "dhchap_dhgroups": [ 00:20:45.568 "null", 00:20:45.568 "ffdhe2048", 00:20:45.568 "ffdhe3072", 00:20:45.568 "ffdhe4096", 00:20:45.568 "ffdhe6144", 00:20:45.568 "ffdhe8192" 00:20:45.568 ] 00:20:45.568 } 00:20:45.568 }, 00:20:45.568 { 00:20:45.568 "method": "bdev_nvme_set_hotplug", 00:20:45.568 "params": { 00:20:45.568 "period_us": 100000, 00:20:45.568 "enable": false 00:20:45.568 } 00:20:45.568 }, 00:20:45.568 { 00:20:45.568 "method": "bdev_malloc_create", 00:20:45.568 "params": { 00:20:45.568 "name": "malloc0", 00:20:45.568 "num_blocks": 8192, 00:20:45.568 "block_size": 4096, 00:20:45.568 "physical_block_size": 4096, 00:20:45.568 "uuid": "3b6a5168-8792-4f5d-afb8-9c4d74dd2db4", 00:20:45.568 "optimal_io_boundary": 0, 00:20:45.568 "md_size": 0, 00:20:45.568 "dif_type": 0, 00:20:45.568 "dif_is_head_of_md": false, 00:20:45.568 "dif_pi_format": 0 00:20:45.568 } 00:20:45.568 }, 00:20:45.568 { 00:20:45.568 "method": "bdev_wait_for_examine" 00:20:45.568 } 00:20:45.568 ] 00:20:45.568 }, 00:20:45.568 { 00:20:45.568 "subsystem": "nbd", 00:20:45.568 "config": [] 00:20:45.568 }, 00:20:45.568 { 00:20:45.568 "subsystem": "scheduler", 00:20:45.568 "config": [ 00:20:45.568 { 00:20:45.568 "method": "framework_set_scheduler", 00:20:45.568 "params": { 00:20:45.568 "name": "static" 00:20:45.568 } 00:20:45.568 } 00:20:45.568 ] 00:20:45.568 }, 00:20:45.568 { 00:20:45.568 "subsystem": "nvmf", 00:20:45.568 "config": [ 00:20:45.568 { 00:20:45.568 "method": "nvmf_set_config", 00:20:45.568 "params": { 00:20:45.568 "discovery_filter": "match_any", 00:20:45.568 "admin_cmd_passthru": { 00:20:45.568 "identify_ctrlr": false 00:20:45.568 }, 00:20:45.568 "dhchap_digests": [ 00:20:45.568 "sha256", 00:20:45.568 "sha384", 00:20:45.568 "sha512" 00:20:45.569 ], 00:20:45.569 "dhchap_dhgroups": [ 00:20:45.569 "null", 00:20:45.569 "ffdhe2048", 00:20:45.569 "ffdhe3072", 00:20:45.569 "ffdhe4096", 00:20:45.569 "ffdhe6144", 00:20:45.569 "ffdhe8192" 00:20:45.569 ] 00:20:45.569 } 00:20:45.569 }, 00:20:45.569 { 00:20:45.569 "method": "nvmf_set_max_subsystems", 00:20:45.569 "params": { 00:20:45.569 "max_subsystems": 1024 00:20:45.569 } 00:20:45.569 }, 00:20:45.569 { 00:20:45.569 "method": "nvmf_set_crdt", 00:20:45.569 "params": { 00:20:45.569 "crdt1": 0, 00:20:45.569 "crdt2": 0, 00:20:45.569 "crdt3": 0 00:20:45.569 } 00:20:45.569 }, 00:20:45.569 { 00:20:45.569 "method": "nvmf_create_transport", 00:20:45.569 "params": 
{ 00:20:45.569 "trtype": "TCP", 00:20:45.569 "max_queue_depth": 128, 00:20:45.569 "max_io_qpairs_per_ctrlr": 127, 00:20:45.569 "in_capsule_data_size": 4096, 00:20:45.569 "max_io_size": 131072, 00:20:45.569 "io_unit_size": 131072, 00:20:45.569 "max_aq_depth": 128, 00:20:45.569 "num_shared_buffers": 511, 00:20:45.569 "buf_cache_size": 4294967295, 00:20:45.569 "dif_insert_or_strip": false, 00:20:45.569 "zcopy": false, 00:20:45.569 "c2h_success": false, 00:20:45.569 "sock_priority": 0, 00:20:45.569 "abort_timeout_sec": 1, 00:20:45.569 "ack_timeout": 0, 00:20:45.569 "data_wr_pool_size": 0 00:20:45.569 } 00:20:45.569 }, 00:20:45.569 { 00:20:45.569 "method": "nvmf_create_subsystem", 00:20:45.569 "params": { 00:20:45.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.569 "allow_any_host": false, 00:20:45.569 "serial_number": "00000000000000000000", 00:20:45.569 "model_number": "SPDK bdev Controller", 00:20:45.569 "max_namespaces": 32, 00:20:45.569 "min_cntlid": 1, 00:20:45.569 "max_cntlid": 65519, 00:20:45.569 "ana_reporting": false 00:20:45.569 } 00:20:45.569 }, 00:20:45.569 { 00:20:45.569 "method": "nvmf_subsystem_add_host", 00:20:45.569 "params": { 00:20:45.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.569 "host": "nqn.2016-06.io.spdk:host1", 00:20:45.569 "psk": "key0" 00:20:45.569 } 00:20:45.569 }, 00:20:45.569 { 00:20:45.569 "method": "nvmf_subsystem_add_ns", 00:20:45.569 "params": { 00:20:45.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.569 "namespace": { 00:20:45.569 "nsid": 1, 00:20:45.569 "bdev_name": "malloc0", 00:20:45.569 "nguid": "3B6A516887924F5DAFB89C4D74DD2DB4", 00:20:45.569 "uuid": "3b6a5168-8792-4f5d-afb8-9c4d74dd2db4", 00:20:45.569 "no_auto_visible": false 00:20:45.569 } 00:20:45.569 } 00:20:45.569 }, 00:20:45.569 { 00:20:45.569 "method": "nvmf_subsystem_add_listener", 00:20:45.569 "params": { 00:20:45.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.569 "listen_address": { 00:20:45.569 "trtype": "TCP", 00:20:45.569 "adrfam": "IPv4", 00:20:45.569 "traddr": "10.0.0.2", 00:20:45.569 "trsvcid": "4420" 00:20:45.569 }, 00:20:45.569 "secure_channel": false, 00:20:45.569 "sock_impl": "ssl" 00:20:45.569 } 00:20:45.569 } 00:20:45.569 ] 00:20:45.569 } 00:20:45.569 ] 00:20:45.569 }' 00:20:45.569 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:45.830 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:45.830 "subsystems": [ 00:20:45.830 { 00:20:45.830 "subsystem": "keyring", 00:20:45.830 "config": [ 00:20:45.830 { 00:20:45.830 "method": "keyring_file_add_key", 00:20:45.830 "params": { 00:20:45.830 "name": "key0", 00:20:45.830 "path": "/tmp/tmp.1WkDES6AI2" 00:20:45.830 } 00:20:45.830 } 00:20:45.830 ] 00:20:45.830 }, 00:20:45.830 { 00:20:45.830 "subsystem": "iobuf", 00:20:45.830 "config": [ 00:20:45.830 { 00:20:45.830 "method": "iobuf_set_options", 00:20:45.830 "params": { 00:20:45.830 "small_pool_count": 8192, 00:20:45.830 "large_pool_count": 1024, 00:20:45.830 "small_bufsize": 8192, 00:20:45.830 "large_bufsize": 135168, 00:20:45.830 "enable_numa": false 00:20:45.830 } 00:20:45.830 } 00:20:45.830 ] 00:20:45.830 }, 00:20:45.830 { 00:20:45.830 "subsystem": "sock", 00:20:45.830 "config": [ 00:20:45.830 { 00:20:45.830 "method": "sock_set_default_impl", 00:20:45.830 "params": { 00:20:45.830 "impl_name": "posix" 00:20:45.830 } 00:20:45.830 }, 00:20:45.830 { 00:20:45.830 "method": "sock_impl_set_options", 00:20:45.830 
"params": { 00:20:45.830 "impl_name": "ssl", 00:20:45.830 "recv_buf_size": 4096, 00:20:45.830 "send_buf_size": 4096, 00:20:45.830 "enable_recv_pipe": true, 00:20:45.830 "enable_quickack": false, 00:20:45.830 "enable_placement_id": 0, 00:20:45.830 "enable_zerocopy_send_server": true, 00:20:45.830 "enable_zerocopy_send_client": false, 00:20:45.830 "zerocopy_threshold": 0, 00:20:45.830 "tls_version": 0, 00:20:45.830 "enable_ktls": false 00:20:45.830 } 00:20:45.830 }, 00:20:45.830 { 00:20:45.830 "method": "sock_impl_set_options", 00:20:45.830 "params": { 00:20:45.830 "impl_name": "posix", 00:20:45.830 "recv_buf_size": 2097152, 00:20:45.830 "send_buf_size": 2097152, 00:20:45.830 "enable_recv_pipe": true, 00:20:45.830 "enable_quickack": false, 00:20:45.830 "enable_placement_id": 0, 00:20:45.830 "enable_zerocopy_send_server": true, 00:20:45.830 "enable_zerocopy_send_client": false, 00:20:45.830 "zerocopy_threshold": 0, 00:20:45.830 "tls_version": 0, 00:20:45.830 "enable_ktls": false 00:20:45.830 } 00:20:45.830 } 00:20:45.830 ] 00:20:45.830 }, 00:20:45.830 { 00:20:45.830 "subsystem": "vmd", 00:20:45.830 "config": [] 00:20:45.830 }, 00:20:45.830 { 00:20:45.830 "subsystem": "accel", 00:20:45.830 "config": [ 00:20:45.830 { 00:20:45.830 "method": "accel_set_options", 00:20:45.830 "params": { 00:20:45.830 "small_cache_size": 128, 00:20:45.830 "large_cache_size": 16, 00:20:45.830 "task_count": 2048, 00:20:45.830 "sequence_count": 2048, 00:20:45.830 "buf_count": 2048 00:20:45.830 } 00:20:45.830 } 00:20:45.830 ] 00:20:45.830 }, 00:20:45.830 { 00:20:45.830 "subsystem": "bdev", 00:20:45.830 "config": [ 00:20:45.830 { 00:20:45.830 "method": "bdev_set_options", 00:20:45.830 "params": { 00:20:45.830 "bdev_io_pool_size": 65535, 00:20:45.830 "bdev_io_cache_size": 256, 00:20:45.830 "bdev_auto_examine": true, 00:20:45.830 "iobuf_small_cache_size": 128, 00:20:45.830 "iobuf_large_cache_size": 16 00:20:45.830 } 00:20:45.830 }, 00:20:45.830 { 00:20:45.830 "method": "bdev_raid_set_options", 00:20:45.830 "params": { 00:20:45.830 "process_window_size_kb": 1024, 00:20:45.830 "process_max_bandwidth_mb_sec": 0 00:20:45.830 } 00:20:45.830 }, 00:20:45.830 { 00:20:45.830 "method": "bdev_iscsi_set_options", 00:20:45.830 "params": { 00:20:45.830 "timeout_sec": 30 00:20:45.831 } 00:20:45.831 }, 00:20:45.831 { 00:20:45.831 "method": "bdev_nvme_set_options", 00:20:45.831 "params": { 00:20:45.831 "action_on_timeout": "none", 00:20:45.831 "timeout_us": 0, 00:20:45.831 "timeout_admin_us": 0, 00:20:45.831 "keep_alive_timeout_ms": 10000, 00:20:45.831 "arbitration_burst": 0, 00:20:45.831 "low_priority_weight": 0, 00:20:45.831 "medium_priority_weight": 0, 00:20:45.831 "high_priority_weight": 0, 00:20:45.831 "nvme_adminq_poll_period_us": 10000, 00:20:45.831 "nvme_ioq_poll_period_us": 0, 00:20:45.831 "io_queue_requests": 512, 00:20:45.831 "delay_cmd_submit": true, 00:20:45.831 "transport_retry_count": 4, 00:20:45.831 "bdev_retry_count": 3, 00:20:45.831 "transport_ack_timeout": 0, 00:20:45.831 "ctrlr_loss_timeout_sec": 0, 00:20:45.831 "reconnect_delay_sec": 0, 00:20:45.831 "fast_io_fail_timeout_sec": 0, 00:20:45.831 "disable_auto_failback": false, 00:20:45.831 "generate_uuids": false, 00:20:45.831 "transport_tos": 0, 00:20:45.831 "nvme_error_stat": false, 00:20:45.831 "rdma_srq_size": 0, 00:20:45.831 "io_path_stat": false, 00:20:45.831 "allow_accel_sequence": false, 00:20:45.831 "rdma_max_cq_size": 0, 00:20:45.831 "rdma_cm_event_timeout_ms": 0, 00:20:45.831 "dhchap_digests": [ 00:20:45.831 "sha256", 00:20:45.831 "sha384", 00:20:45.831 
"sha512" 00:20:45.831 ], 00:20:45.831 "dhchap_dhgroups": [ 00:20:45.831 "null", 00:20:45.831 "ffdhe2048", 00:20:45.831 "ffdhe3072", 00:20:45.831 "ffdhe4096", 00:20:45.831 "ffdhe6144", 00:20:45.831 "ffdhe8192" 00:20:45.831 ] 00:20:45.831 } 00:20:45.831 }, 00:20:45.831 { 00:20:45.831 "method": "bdev_nvme_attach_controller", 00:20:45.831 "params": { 00:20:45.831 "name": "nvme0", 00:20:45.831 "trtype": "TCP", 00:20:45.831 "adrfam": "IPv4", 00:20:45.831 "traddr": "10.0.0.2", 00:20:45.831 "trsvcid": "4420", 00:20:45.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.831 "prchk_reftag": false, 00:20:45.831 "prchk_guard": false, 00:20:45.831 "ctrlr_loss_timeout_sec": 0, 00:20:45.831 "reconnect_delay_sec": 0, 00:20:45.831 "fast_io_fail_timeout_sec": 0, 00:20:45.831 "psk": "key0", 00:20:45.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.831 "hdgst": false, 00:20:45.831 "ddgst": false, 00:20:45.831 "multipath": "multipath" 00:20:45.831 } 00:20:45.831 }, 00:20:45.831 { 00:20:45.831 "method": "bdev_nvme_set_hotplug", 00:20:45.831 "params": { 00:20:45.831 "period_us": 100000, 00:20:45.831 "enable": false 00:20:45.831 } 00:20:45.831 }, 00:20:45.831 { 00:20:45.831 "method": "bdev_enable_histogram", 00:20:45.831 "params": { 00:20:45.831 "name": "nvme0n1", 00:20:45.831 "enable": true 00:20:45.831 } 00:20:45.831 }, 00:20:45.831 { 00:20:45.831 "method": "bdev_wait_for_examine" 00:20:45.831 } 00:20:45.831 ] 00:20:45.831 }, 00:20:45.831 { 00:20:45.831 "subsystem": "nbd", 00:20:45.831 "config": [] 00:20:45.831 } 00:20:45.831 ] 00:20:45.831 }' 00:20:45.831 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3543799 00:20:45.831 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3543799 ']' 00:20:45.831 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3543799 00:20:45.831 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:45.831 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:45.831 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3543799 00:20:45.831 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:45.831 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:45.831 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3543799' 00:20:45.831 killing process with pid 3543799 00:20:45.831 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3543799 00:20:45.831 Received shutdown signal, test time was about 1.000000 seconds 00:20:45.831 00:20:45.831 Latency(us) 00:20:45.831 [2024-11-20T06:19:08.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.831 [2024-11-20T06:19:08.109Z] =================================================================================================================== 00:20:45.831 [2024-11-20T06:19:08.109Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:45.831 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3543799 00:20:45.831 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3543533 00:20:45.831 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3543533 
']' 00:20:45.831 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3543533 00:20:45.831 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:45.831 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:45.831 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3543533 00:20:46.093 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:46.093 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:46.093 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3543533' 00:20:46.093 killing process with pid 3543533 00:20:46.093 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3543533 00:20:46.093 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3543533 00:20:46.093 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:46.093 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:46.093 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:46.093 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.093 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:46.093 "subsystems": [ 00:20:46.093 { 00:20:46.093 "subsystem": "keyring", 00:20:46.093 "config": [ 00:20:46.093 { 00:20:46.093 "method": "keyring_file_add_key", 00:20:46.093 "params": { 00:20:46.093 "name": "key0", 00:20:46.093 "path": "/tmp/tmp.1WkDES6AI2" 00:20:46.093 } 00:20:46.093 } 00:20:46.093 ] 00:20:46.093 }, 00:20:46.093 { 00:20:46.093 "subsystem": "iobuf", 00:20:46.093 "config": [ 00:20:46.093 { 00:20:46.093 "method": "iobuf_set_options", 00:20:46.093 "params": { 00:20:46.093 "small_pool_count": 8192, 00:20:46.093 "large_pool_count": 1024, 00:20:46.093 "small_bufsize": 8192, 00:20:46.093 "large_bufsize": 135168, 00:20:46.093 "enable_numa": false 00:20:46.093 } 00:20:46.093 } 00:20:46.093 ] 00:20:46.093 }, 00:20:46.093 { 00:20:46.093 "subsystem": "sock", 00:20:46.093 "config": [ 00:20:46.093 { 00:20:46.093 "method": "sock_set_default_impl", 00:20:46.093 "params": { 00:20:46.093 "impl_name": "posix" 00:20:46.093 } 00:20:46.093 }, 00:20:46.093 { 00:20:46.093 "method": "sock_impl_set_options", 00:20:46.093 "params": { 00:20:46.093 "impl_name": "ssl", 00:20:46.093 "recv_buf_size": 4096, 00:20:46.093 "send_buf_size": 4096, 00:20:46.093 "enable_recv_pipe": true, 00:20:46.093 "enable_quickack": false, 00:20:46.093 "enable_placement_id": 0, 00:20:46.093 "enable_zerocopy_send_server": true, 00:20:46.093 "enable_zerocopy_send_client": false, 00:20:46.093 "zerocopy_threshold": 0, 00:20:46.093 "tls_version": 0, 00:20:46.093 "enable_ktls": false 00:20:46.093 } 00:20:46.093 }, 00:20:46.093 { 00:20:46.093 "method": "sock_impl_set_options", 00:20:46.093 "params": { 00:20:46.093 "impl_name": "posix", 00:20:46.093 "recv_buf_size": 2097152, 00:20:46.093 "send_buf_size": 2097152, 00:20:46.093 "enable_recv_pipe": true, 00:20:46.093 "enable_quickack": false, 00:20:46.093 "enable_placement_id": 0, 00:20:46.093 "enable_zerocopy_send_server": true, 00:20:46.093 "enable_zerocopy_send_client": 
false, 00:20:46.093 "zerocopy_threshold": 0, 00:20:46.093 "tls_version": 0, 00:20:46.093 "enable_ktls": false 00:20:46.093 } 00:20:46.093 } 00:20:46.093 ] 00:20:46.093 }, 00:20:46.093 { 00:20:46.093 "subsystem": "vmd", 00:20:46.093 "config": [] 00:20:46.093 }, 00:20:46.093 { 00:20:46.093 "subsystem": "accel", 00:20:46.093 "config": [ 00:20:46.093 { 00:20:46.093 "method": "accel_set_options", 00:20:46.093 "params": { 00:20:46.093 "small_cache_size": 128, 00:20:46.093 "large_cache_size": 16, 00:20:46.093 "task_count": 2048, 00:20:46.093 "sequence_count": 2048, 00:20:46.093 "buf_count": 2048 00:20:46.093 } 00:20:46.093 } 00:20:46.093 ] 00:20:46.093 }, 00:20:46.093 { 00:20:46.093 "subsystem": "bdev", 00:20:46.093 "config": [ 00:20:46.093 { 00:20:46.093 "method": "bdev_set_options", 00:20:46.093 "params": { 00:20:46.093 "bdev_io_pool_size": 65535, 00:20:46.093 "bdev_io_cache_size": 256, 00:20:46.093 "bdev_auto_examine": true, 00:20:46.093 "iobuf_small_cache_size": 128, 00:20:46.093 "iobuf_large_cache_size": 16 00:20:46.093 } 00:20:46.093 }, 00:20:46.093 { 00:20:46.093 "method": "bdev_raid_set_options", 00:20:46.093 "params": { 00:20:46.093 "process_window_size_kb": 1024, 00:20:46.093 "process_max_bandwidth_mb_sec": 0 00:20:46.093 } 00:20:46.093 }, 00:20:46.093 { 00:20:46.093 "method": "bdev_iscsi_set_options", 00:20:46.093 "params": { 00:20:46.093 "timeout_sec": 30 00:20:46.093 } 00:20:46.093 }, 00:20:46.093 { 00:20:46.093 "method": "bdev_nvme_set_options", 00:20:46.093 "params": { 00:20:46.093 "action_on_timeout": "none", 00:20:46.093 "timeout_us": 0, 00:20:46.093 "timeout_admin_us": 0, 00:20:46.093 "keep_alive_timeout_ms": 10000, 00:20:46.093 "arbitration_burst": 0, 00:20:46.093 "low_priority_weight": 0, 00:20:46.093 "medium_priority_weight": 0, 00:20:46.093 "high_priority_weight": 0, 00:20:46.093 "nvme_adminq_poll_period_us": 10000, 00:20:46.093 "nvme_ioq_poll_period_us": 0, 00:20:46.093 "io_queue_requests": 0, 00:20:46.093 "delay_cmd_submit": true, 00:20:46.093 "transport_retry_count": 4, 00:20:46.093 "bdev_retry_count": 3, 00:20:46.093 "transport_ack_timeout": 0, 00:20:46.093 "ctrlr_loss_timeout_sec": 0, 00:20:46.093 "reconnect_delay_sec": 0, 00:20:46.093 "fast_io_fail_timeout_sec": 0, 00:20:46.093 "disable_auto_failback": false, 00:20:46.093 "generate_uuids": false, 00:20:46.093 "transport_tos": 0, 00:20:46.093 "nvme_error_stat": false, 00:20:46.093 "rdma_srq_size": 0, 00:20:46.093 "io_path_stat": false, 00:20:46.093 "allow_accel_sequence": false, 00:20:46.093 "rdma_max_cq_size": 0, 00:20:46.093 "rdma_cm_event_timeout_ms": 0, 00:20:46.093 "dhchap_digests": [ 00:20:46.093 "sha256", 00:20:46.093 "sha384", 00:20:46.093 "sha512" 00:20:46.094 ], 00:20:46.094 "dhchap_dhgroups": [ 00:20:46.094 "null", 00:20:46.094 "ffdhe2048", 00:20:46.094 "ffdhe3072", 00:20:46.094 "ffdhe4096", 00:20:46.094 "ffdhe6144", 00:20:46.094 "ffdhe8192" 00:20:46.094 ] 00:20:46.094 } 00:20:46.094 }, 00:20:46.094 { 00:20:46.094 "method": "bdev_nvme_set_hotplug", 00:20:46.094 "params": { 00:20:46.094 "period_us": 100000, 00:20:46.094 "enable": false 00:20:46.094 } 00:20:46.094 }, 00:20:46.094 { 00:20:46.094 "method": "bdev_malloc_create", 00:20:46.094 "params": { 00:20:46.094 "name": "malloc0", 00:20:46.094 "num_blocks": 8192, 00:20:46.094 "block_size": 4096, 00:20:46.094 "physical_block_size": 4096, 00:20:46.094 "uuid": "3b6a5168-8792-4f5d-afb8-9c4d74dd2db4", 00:20:46.094 "optimal_io_boundary": 0, 00:20:46.094 "md_size": 0, 00:20:46.094 "dif_type": 0, 00:20:46.094 "dif_is_head_of_md": false, 00:20:46.094 "dif_pi_format": 0 
00:20:46.094 } 00:20:46.094 }, 00:20:46.094 { 00:20:46.094 "method": "bdev_wait_for_examine" 00:20:46.094 } 00:20:46.094 ] 00:20:46.094 }, 00:20:46.094 { 00:20:46.094 "subsystem": "nbd", 00:20:46.094 "config": [] 00:20:46.094 }, 00:20:46.094 { 00:20:46.094 "subsystem": "scheduler", 00:20:46.094 "config": [ 00:20:46.094 { 00:20:46.094 "method": "framework_set_scheduler", 00:20:46.094 "params": { 00:20:46.094 "name": "static" 00:20:46.094 } 00:20:46.094 } 00:20:46.094 ] 00:20:46.094 }, 00:20:46.094 { 00:20:46.094 "subsystem": "nvmf", 00:20:46.094 "config": [ 00:20:46.094 { 00:20:46.094 "method": "nvmf_set_config", 00:20:46.094 "params": { 00:20:46.094 "discovery_filter": "match_any", 00:20:46.094 "admin_cmd_passthru": { 00:20:46.094 "identify_ctrlr": false 00:20:46.094 }, 00:20:46.094 "dhchap_digests": [ 00:20:46.094 "sha256", 00:20:46.094 "sha384", 00:20:46.094 "sha512" 00:20:46.094 ], 00:20:46.094 "dhchap_dhgroups": [ 00:20:46.094 "null", 00:20:46.094 "ffdhe2048", 00:20:46.094 "ffdhe3072", 00:20:46.094 "ffdhe4096", 00:20:46.094 "ffdhe6144", 00:20:46.094 "ffdhe8192" 00:20:46.094 ] 00:20:46.094 } 00:20:46.094 }, 00:20:46.094 { 00:20:46.094 "method": "nvmf_set_max_subsystems", 00:20:46.094 "params": { 00:20:46.094 "max_subsystems": 1024 00:20:46.094 } 00:20:46.094 }, 00:20:46.094 { 00:20:46.094 "method": "nvmf_set_crdt", 00:20:46.094 "params": { 00:20:46.094 "crdt1": 0, 00:20:46.094 "crdt2": 0, 00:20:46.094 "crdt3": 0 00:20:46.094 } 00:20:46.094 }, 00:20:46.094 { 00:20:46.094 "method": "nvmf_create_transport", 00:20:46.094 "params": { 00:20:46.094 "trtype": "TCP", 00:20:46.094 "max_queue_depth": 128, 00:20:46.094 "max_io_qpairs_per_ctrlr": 127, 00:20:46.094 "in_capsule_data_size": 4096, 00:20:46.094 "max_io_size": 131072, 00:20:46.094 "io_unit_size": 131072, 00:20:46.094 "max_aq_depth": 128, 00:20:46.094 "num_shared_buffers": 511, 00:20:46.094 "buf_cache_size": 4294967295, 00:20:46.094 "dif_insert_or_strip": false, 00:20:46.094 "zcopy": false, 00:20:46.094 "c2h_success": false, 00:20:46.094 "sock_priority": 0, 00:20:46.094 "abort_timeout_sec": 1, 00:20:46.094 "ack_timeout": 0, 00:20:46.094 "data_wr_pool_size": 0 00:20:46.094 } 00:20:46.094 }, 00:20:46.094 { 00:20:46.094 "method": "nvmf_create_subsystem", 00:20:46.094 "params": { 00:20:46.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.094 "allow_any_host": false, 00:20:46.094 "serial_number": "00000000000000000000", 00:20:46.094 "model_number": "SPDK bdev Controller", 00:20:46.094 "max_namespaces": 32, 00:20:46.094 "min_cntlid": 1, 00:20:46.094 "max_cntlid": 65519, 00:20:46.094 "ana_reporting": false 00:20:46.094 } 00:20:46.094 }, 00:20:46.094 { 00:20:46.094 "method": "nvmf_subsystem_add_host", 00:20:46.094 "params": { 00:20:46.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.094 "host": "nqn.2016-06.io.spdk:host1", 00:20:46.094 "psk": "key0" 00:20:46.094 } 00:20:46.094 }, 00:20:46.094 { 00:20:46.094 "method": "nvmf_subsystem_add_ns", 00:20:46.094 "params": { 00:20:46.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.094 "namespace": { 00:20:46.094 "nsid": 1, 00:20:46.094 "bdev_name": "malloc0", 00:20:46.094 "nguid": "3B6A516887924F5DAFB89C4D74DD2DB4", 00:20:46.094 "uuid": "3b6a5168-8792-4f5d-afb8-9c4d74dd2db4", 00:20:46.094 "no_auto_visible": false 00:20:46.094 } 00:20:46.094 } 00:20:46.094 }, 00:20:46.094 { 00:20:46.094 "method": "nvmf_subsystem_add_listener", 00:20:46.094 "params": { 00:20:46.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.094 "listen_address": { 00:20:46.094 "trtype": "TCP", 00:20:46.094 "adrfam": "IPv4", 
00:20:46.094 "traddr": "10.0.0.2", 00:20:46.094 "trsvcid": "4420" 00:20:46.094 }, 00:20:46.094 "secure_channel": false, 00:20:46.094 "sock_impl": "ssl" 00:20:46.094 } 00:20:46.094 } 00:20:46.094 ] 00:20:46.094 } 00:20:46.094 ] 00:20:46.094 }' 00:20:46.094 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3544313 00:20:46.094 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3544313 00:20:46.094 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:46.094 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3544313 ']' 00:20:46.094 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.094 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:46.094 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.094 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:46.094 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.094 [2024-11-20 07:19:08.303177] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:20:46.094 [2024-11-20 07:19:08.303232] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.379 [2024-11-20 07:19:08.393255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.379 [2024-11-20 07:19:08.429799] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.379 [2024-11-20 07:19:08.429844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.379 [2024-11-20 07:19:08.429850] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.379 [2024-11-20 07:19:08.429856] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.379 [2024-11-20 07:19:08.429861] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:46.379 [2024-11-20 07:19:08.430486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.379 [2024-11-20 07:19:08.623936] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.654 [2024-11-20 07:19:08.655973] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:46.654 [2024-11-20 07:19:08.656189] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.943 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:46.943 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:46.943 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:46.943 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:46.943 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.943 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.943 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3544600 00:20:46.943 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3544600 /var/tmp/bdevperf.sock 00:20:46.943 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3544600 ']' 00:20:46.943 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:46.943 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:46.943 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:46.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
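Every app start in this section blocks on waitforlisten until the app's UNIX-domain RPC socket answers. A minimal sketch of that idea under stated assumptions (the real helper in common/autotest_common.sh, which the @833-@842 trace lines come from, is more thorough):

  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
      local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1            # app died while starting
          "$rpc" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1                                              # never came up
  }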
00:20:46.943 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:46.943 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:46.943 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.943 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:46.943 "subsystems": [ 00:20:46.943 { 00:20:46.943 "subsystem": "keyring", 00:20:46.943 "config": [ 00:20:46.943 { 00:20:46.943 "method": "keyring_file_add_key", 00:20:46.943 "params": { 00:20:46.943 "name": "key0", 00:20:46.943 "path": "/tmp/tmp.1WkDES6AI2" 00:20:46.943 } 00:20:46.943 } 00:20:46.943 ] 00:20:46.943 }, 00:20:46.943 { 00:20:46.943 "subsystem": "iobuf", 00:20:46.943 "config": [ 00:20:46.943 { 00:20:46.943 "method": "iobuf_set_options", 00:20:46.943 "params": { 00:20:46.943 "small_pool_count": 8192, 00:20:46.943 "large_pool_count": 1024, 00:20:46.943 "small_bufsize": 8192, 00:20:46.943 "large_bufsize": 135168, 00:20:46.943 "enable_numa": false 00:20:46.943 } 00:20:46.943 } 00:20:46.943 ] 00:20:46.943 }, 00:20:46.943 { 00:20:46.943 "subsystem": "sock", 00:20:46.943 "config": [ 00:20:46.943 { 00:20:46.943 "method": "sock_set_default_impl", 00:20:46.943 "params": { 00:20:46.943 "impl_name": "posix" 00:20:46.943 } 00:20:46.943 }, 00:20:46.943 { 00:20:46.943 "method": "sock_impl_set_options", 00:20:46.943 "params": { 00:20:46.943 "impl_name": "ssl", 00:20:46.943 "recv_buf_size": 4096, 00:20:46.943 "send_buf_size": 4096, 00:20:46.943 "enable_recv_pipe": true, 00:20:46.943 "enable_quickack": false, 00:20:46.943 "enable_placement_id": 0, 00:20:46.943 "enable_zerocopy_send_server": true, 00:20:46.943 "enable_zerocopy_send_client": false, 00:20:46.943 "zerocopy_threshold": 0, 00:20:46.943 "tls_version": 0, 00:20:46.943 "enable_ktls": false 00:20:46.943 } 00:20:46.943 }, 00:20:46.943 { 00:20:46.943 "method": "sock_impl_set_options", 00:20:46.943 "params": { 00:20:46.943 "impl_name": "posix", 00:20:46.943 "recv_buf_size": 2097152, 00:20:46.943 "send_buf_size": 2097152, 00:20:46.943 "enable_recv_pipe": true, 00:20:46.944 "enable_quickack": false, 00:20:46.944 "enable_placement_id": 0, 00:20:46.944 "enable_zerocopy_send_server": true, 00:20:46.944 "enable_zerocopy_send_client": false, 00:20:46.944 "zerocopy_threshold": 0, 00:20:46.944 "tls_version": 0, 00:20:46.944 "enable_ktls": false 00:20:46.944 } 00:20:46.944 } 00:20:46.944 ] 00:20:46.944 }, 00:20:46.944 { 00:20:46.944 "subsystem": "vmd", 00:20:46.944 "config": [] 00:20:46.944 }, 00:20:46.944 { 00:20:46.944 "subsystem": "accel", 00:20:46.944 "config": [ 00:20:46.944 { 00:20:46.944 "method": "accel_set_options", 00:20:46.944 "params": { 00:20:46.944 "small_cache_size": 128, 00:20:46.944 "large_cache_size": 16, 00:20:46.944 "task_count": 2048, 00:20:46.944 "sequence_count": 2048, 00:20:46.944 "buf_count": 2048 00:20:46.944 } 00:20:46.944 } 00:20:46.944 ] 00:20:46.944 }, 00:20:46.944 { 00:20:46.944 "subsystem": "bdev", 00:20:46.944 "config": [ 00:20:46.944 { 00:20:46.944 "method": "bdev_set_options", 00:20:46.944 "params": { 00:20:46.944 "bdev_io_pool_size": 65535, 00:20:46.944 "bdev_io_cache_size": 256, 00:20:46.944 "bdev_auto_examine": true, 00:20:46.944 "iobuf_small_cache_size": 128, 00:20:46.944 "iobuf_large_cache_size": 16 00:20:46.944 } 00:20:46.944 }, 00:20:46.944 { 00:20:46.944 "method": 
"bdev_raid_set_options", 00:20:46.944 "params": { 00:20:46.944 "process_window_size_kb": 1024, 00:20:46.944 "process_max_bandwidth_mb_sec": 0 00:20:46.944 } 00:20:46.944 }, 00:20:46.944 { 00:20:46.944 "method": "bdev_iscsi_set_options", 00:20:46.944 "params": { 00:20:46.944 "timeout_sec": 30 00:20:46.944 } 00:20:46.944 }, 00:20:46.944 { 00:20:46.944 "method": "bdev_nvme_set_options", 00:20:46.944 "params": { 00:20:46.944 "action_on_timeout": "none", 00:20:46.944 "timeout_us": 0, 00:20:46.944 "timeout_admin_us": 0, 00:20:46.944 "keep_alive_timeout_ms": 10000, 00:20:46.944 "arbitration_burst": 0, 00:20:46.944 "low_priority_weight": 0, 00:20:46.944 "medium_priority_weight": 0, 00:20:46.944 "high_priority_weight": 0, 00:20:46.944 "nvme_adminq_poll_period_us": 10000, 00:20:46.944 "nvme_ioq_poll_period_us": 0, 00:20:46.944 "io_queue_requests": 512, 00:20:46.944 "delay_cmd_submit": true, 00:20:46.944 "transport_retry_count": 4, 00:20:46.944 "bdev_retry_count": 3, 00:20:46.944 "transport_ack_timeout": 0, 00:20:46.944 "ctrlr_loss_timeout_sec": 0, 00:20:46.944 "reconnect_delay_sec": 0, 00:20:46.944 "fast_io_fail_timeout_sec": 0, 00:20:46.944 "disable_auto_failback": false, 00:20:46.944 "generate_uuids": false, 00:20:46.944 "transport_tos": 0, 00:20:46.944 "nvme_error_stat": false, 00:20:46.944 "rdma_srq_size": 0, 00:20:46.944 "io_path_stat": false, 00:20:46.944 "allow_accel_sequence": false, 00:20:46.944 "rdma_max_cq_size": 0, 00:20:46.944 "rdma_cm_event_timeout_ms": 0, 00:20:46.944 "dhchap_digests": [ 00:20:46.944 "sha256", 00:20:46.944 "sha384", 00:20:46.944 "sha512" 00:20:46.944 ], 00:20:46.944 "dhchap_dhgroups": [ 00:20:46.944 "null", 00:20:46.944 "ffdhe2048", 00:20:46.944 "ffdhe3072", 00:20:46.944 "ffdhe4096", 00:20:46.944 "ffdhe6144", 00:20:46.944 "ffdhe8192" 00:20:46.944 ] 00:20:46.944 } 00:20:46.944 }, 00:20:46.944 { 00:20:46.944 "method": "bdev_nvme_attach_controller", 00:20:46.944 "params": { 00:20:46.944 "name": "nvme0", 00:20:46.944 "trtype": "TCP", 00:20:46.944 "adrfam": "IPv4", 00:20:46.944 "traddr": "10.0.0.2", 00:20:46.944 "trsvcid": "4420", 00:20:46.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.944 "prchk_reftag": false, 00:20:46.944 "prchk_guard": false, 00:20:46.944 "ctrlr_loss_timeout_sec": 0, 00:20:46.944 "reconnect_delay_sec": 0, 00:20:46.944 "fast_io_fail_timeout_sec": 0, 00:20:46.944 "psk": "key0", 00:20:46.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:46.944 "hdgst": false, 00:20:46.944 "ddgst": false, 00:20:46.944 "multipath": "multipath" 00:20:46.944 } 00:20:46.944 }, 00:20:46.944 { 00:20:46.944 "method": "bdev_nvme_set_hotplug", 00:20:46.944 "params": { 00:20:46.944 "period_us": 100000, 00:20:46.944 "enable": false 00:20:46.944 } 00:20:46.944 }, 00:20:46.944 { 00:20:46.944 "method": "bdev_enable_histogram", 00:20:46.944 "params": { 00:20:46.944 "name": "nvme0n1", 00:20:46.944 "enable": true 00:20:46.944 } 00:20:46.944 }, 00:20:46.944 { 00:20:46.944 "method": "bdev_wait_for_examine" 00:20:46.944 } 00:20:46.944 ] 00:20:46.944 }, 00:20:46.944 { 00:20:46.944 "subsystem": "nbd", 00:20:46.944 "config": [] 00:20:46.944 } 00:20:46.944 ] 00:20:46.944 }' 00:20:46.944 [2024-11-20 07:19:09.187129] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:20:46.944 [2024-11-20 07:19:09.187193] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3544600 ]
00:20:47.241 [2024-11-20 07:19:09.269394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:47.241 [2024-11-20 07:19:09.299104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:47.241 [2024-11-20 07:19:09.433460] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:20:47.824 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:20:47.824 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0
00:20:47.825 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:47.825 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name'
00:20:48.085 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:48.085 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:48.085 Running I/O for 1 seconds...
00:20:49.029 4745.00 IOPS, 18.54 MiB/s
00:20:49.029 Latency(us)
00:20:49.029 [2024-11-20T06:19:11.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:49.029 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:20:49.029 Verification LBA range: start 0x0 length 0x2000
00:20:49.029 nvme0n1 : 1.02 4784.77 18.69 0.00 0.00 26545.57 5270.19 23265.28
00:20:49.029 [2024-11-20T06:19:11.307Z] ===================================================================================================================
00:20:49.029 [2024-11-20T06:19:11.307Z] Total : 4784.77 18.69 0.00 0.00 26545.57 5270.19 23265.28
00:20:49.029 {
00:20:49.029 "results": [
00:20:49.029 {
00:20:49.029 "job": "nvme0n1",
00:20:49.029 "core_mask": "0x2",
00:20:49.029 "workload": "verify",
00:20:49.029 "status": "finished",
00:20:49.029 "verify_range": {
00:20:49.029 "start": 0,
00:20:49.029 "length": 8192
00:20:49.029 },
00:20:49.029 "queue_depth": 128,
00:20:49.029 "io_size": 4096,
00:20:49.029 "runtime": 1.018439,
00:20:49.029 "iops": 4784.773560321237,
00:20:49.029 "mibps": 18.69052172000483,
00:20:49.029 "io_failed": 0,
00:20:49.029 "io_timeout": 0,
00:20:49.029 "avg_latency_us": 26545.568282372256,
00:20:49.029 "min_latency_us": 5270.1866666666665,
00:20:49.029 "max_latency_us": 23265.28
00:20:49.029 }
00:20:49.029 ],
00:20:49.029 "core_count": 1
00:20:49.029 }
00:20:49.029 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT
00:20:49.029 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup
00:20:49.029 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0
00:20:49.029 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id
00:20:49.029 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0
00:20:49.029 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']'
00:20:49.029 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:20:49.029 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0
00:20:49.029 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]]
00:20:49.029 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files
00:20:49.029 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:20:49.029 nvmf_trace.0
00:20:49.291 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0
00:20:49.291 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3544600
00:20:49.291 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3544600 ']'
00:20:49.291 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3544600
00:20:49.291 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname
00:20:49.291 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:20:49.291 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3544600
00:20:49.291 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:20:49.291 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:20:49.291 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3544600'
00:20:49.291 killing process with pid 3544600
00:20:49.291 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3544600
00:20:49.291 Received shutdown signal, test time was about 1.000000 seconds
00:20:49.291 
00:20:49.291 Latency(us)
00:20:49.291 [2024-11-20T06:19:11.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:49.291 [2024-11-20T06:19:11.569Z] ===================================================================================================================
00:20:49.291 [2024-11-20T06:19:11.569Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:49.291 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3544600
00:20:49.291 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini
00:20:49.291 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:49.291 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync
00:20:49.291 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:49.291 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e
00:20:49.291 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:49.291 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:49.291 rmmod nvme_tcp
00:20:49.291 rmmod nvme_fabrics
00:20:49.291 rmmod nvme_keyring
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3544313 ']'
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3544313
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3544313 ']'
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3544313
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3544313
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3544313'
00:20:49.553 killing process with pid 3544313
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3544313
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3544313
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:49.553 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:52.119 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:20:52.119 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.98fprcuXE2 /tmp/tmp.RtBJdQsYYt /tmp/tmp.1WkDES6AI2
00:20:52.119 
00:20:52.119 real 1m28.011s
00:20:52.119 user 2m19.838s
00:20:52.119 sys 0m26.533s
00:20:52.119 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable
00:20:52.119 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:52.119 ************************************
00:20:52.119 END TEST nvmf_tls
00:20:52.119 ************************************ 00:20:52.119 07:19:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:52.119 07:19:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:52.119 07:19:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:52.119 07:19:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:52.119 ************************************ 00:20:52.119 START TEST nvmf_fips 00:20:52.119 ************************************ 00:20:52.119 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:52.119 * Looking for test storage... 00:20:52.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:52.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.119 --rc genhtml_branch_coverage=1 00:20:52.119 --rc genhtml_function_coverage=1 00:20:52.119 --rc genhtml_legend=1 00:20:52.119 --rc geninfo_all_blocks=1 00:20:52.119 --rc geninfo_unexecuted_blocks=1 00:20:52.119 00:20:52.119 ' 00:20:52.119 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:52.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.119 --rc genhtml_branch_coverage=1 00:20:52.119 --rc genhtml_function_coverage=1 00:20:52.119 --rc genhtml_legend=1 00:20:52.119 --rc geninfo_all_blocks=1 00:20:52.119 --rc geninfo_unexecuted_blocks=1 00:20:52.120 00:20:52.120 ' 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:52.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.120 --rc genhtml_branch_coverage=1 00:20:52.120 --rc genhtml_function_coverage=1 00:20:52.120 --rc genhtml_legend=1 00:20:52.120 --rc geninfo_all_blocks=1 00:20:52.120 --rc geninfo_unexecuted_blocks=1 00:20:52.120 00:20:52.120 ' 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:52.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.120 --rc genhtml_branch_coverage=1 00:20:52.120 --rc genhtml_function_coverage=1 00:20:52.120 --rc genhtml_legend=1 00:20:52.120 --rc geninfo_all_blocks=1 00:20:52.120 --rc geninfo_unexecuted_blocks=1 00:20:52.120 00:20:52.120 ' 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:52.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:52.120 07:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:52.120 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:52.121 Error setting digest 00:20:52.121 40B2A5E3DC7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:52.121 40B2A5E3DC7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:52.121 
07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:52.121 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.258 07:19:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:00.258 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:00.258 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.258 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.259 07:19:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:00.259 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:00.259 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:00.259 07:19:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:00.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:00.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:21:00.259 00:21:00.259 --- 10.0.0.2 ping statistics --- 00:21:00.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.259 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:00.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:00.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:21:00.259 00:21:00.259 --- 10.0.0.1 ping statistics --- 00:21:00.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.259 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3549306 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3549306 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3549306 ']' 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:00.259 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:00.259 [2024-11-20 07:19:21.896367] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
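(Aside: the ping exchange above completes the per-run network setup that nvmf_tcp_init performs before the target starts. The target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace while the initiator-side port cvl_0_1 stays in the root namespace, so NVMe/TCP traffic crosses a real NIC-to-NIC link on a single host. Condensed from the trace, using the interface names and addresses of this run; the lo-up and iptables ACCEPT steps shown in the trace are elided here:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

With both directions reachable, the nvmf_tgt below is launched inside the target namespace via "ip netns exec cvl_0_0_ns_spdk", as the NVMF_APP line above shows.)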
00:21:00.259 [2024-11-20 07:19:21.896440] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.259 [2024-11-20 07:19:21.996279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.259 [2024-11-20 07:19:22.046091] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.259 [2024-11-20 07:19:22.046145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.259 [2024-11-20 07:19:22.046153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.259 [2024-11-20 07:19:22.046170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.259 [2024-11-20 07:19:22.046177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:00.259 [2024-11-20 07:19:22.046947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.522 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:00.522 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:21:00.522 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:00.522 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:00.522 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:00.522 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.522 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:00.522 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:00.522 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:00.522 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.pT0 00:21:00.522 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:00.522 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.pT0 00:21:00.522 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.pT0 00:21:00.522 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.pT0 00:21:00.522 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:00.783 [2024-11-20 07:19:22.917183] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.783 [2024-11-20 07:19:22.933183] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:00.783 [2024-11-20 07:19:22.933534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.783 malloc0 00:21:00.783 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:00.783 07:19:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3549646 00:21:00.783 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3549646 /var/tmp/bdevperf.sock 00:21:00.783 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:00.783 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3549646 ']' 00:21:00.783 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.783 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:00.783 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.783 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:00.783 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:01.044 [2024-11-20 07:19:23.073287] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:21:01.044 [2024-11-20 07:19:23.073366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3549646 ] 00:21:01.044 [2024-11-20 07:19:23.168563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.044 [2024-11-20 07:19:23.219759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.617 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:01.617 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:21:01.617 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.pT0 00:21:01.878 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:02.139 [2024-11-20 07:19:24.229322] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.139 TLSTESTn1 00:21:02.139 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:02.400 Running I/O for 10 seconds... 
00:21:04.287 3379.00 IOPS, 13.20 MiB/s [2024-11-20T06:19:27.507Z] 4423.50 IOPS, 17.28 MiB/s [2024-11-20T06:19:28.447Z] 4741.33 IOPS, 18.52 MiB/s [2024-11-20T06:19:29.835Z] 4911.00 IOPS, 19.18 MiB/s [2024-11-20T06:19:30.775Z] 4946.60 IOPS, 19.32 MiB/s [2024-11-20T06:19:31.716Z] 4834.67 IOPS, 18.89 MiB/s [2024-11-20T06:19:32.656Z] 5022.57 IOPS, 19.62 MiB/s [2024-11-20T06:19:33.596Z] 5164.00 IOPS, 20.17 MiB/s [2024-11-20T06:19:34.538Z] 5149.11 IOPS, 20.11 MiB/s [2024-11-20T06:19:34.538Z] 5096.20 IOPS, 19.91 MiB/s 00:21:12.260 Latency(us) 00:21:12.260 [2024-11-20T06:19:34.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.260 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:12.260 Verification LBA range: start 0x0 length 0x2000 00:21:12.260 TLSTESTn1 : 10.04 5090.48 19.88 0.00 0.00 25089.97 6062.08 35607.89 00:21:12.260 [2024-11-20T06:19:34.538Z] =================================================================================================================== 00:21:12.260 [2024-11-20T06:19:34.538Z] Total : 5090.48 19.88 0.00 0.00 25089.97 6062.08 35607.89 00:21:12.260 { 00:21:12.260 "results": [ 00:21:12.260 { 00:21:12.260 "job": "TLSTESTn1", 00:21:12.260 "core_mask": "0x4", 00:21:12.260 "workload": "verify", 00:21:12.260 "status": "finished", 00:21:12.260 "verify_range": { 00:21:12.260 "start": 0, 00:21:12.260 "length": 8192 00:21:12.260 }, 00:21:12.260 "queue_depth": 128, 00:21:12.260 "io_size": 4096, 00:21:12.260 "runtime": 10.036389, 00:21:12.260 "iops": 5090.476265915959, 00:21:12.260 "mibps": 19.884672913734214, 00:21:12.260 "io_failed": 0, 00:21:12.260 "io_timeout": 0, 00:21:12.260 "avg_latency_us": 25089.97490963659, 00:21:12.260 "min_latency_us": 6062.08, 00:21:12.260 "max_latency_us": 35607.89333333333 00:21:12.260 } 00:21:12.260 ], 00:21:12.260 "core_count": 1 00:21:12.260 } 00:21:12.260 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:12.260 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:12.260 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:21:12.260 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:21:12.260 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:21:12.260 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:12.260 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:21:12.260 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:21:12.260 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:21:12.260 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:12.260 nvmf_trace.0 00:21:12.521 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:21:12.521 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3549646 00:21:12.521 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3549646 ']' 00:21:12.521 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 
-- # kill -0 3549646 00:21:12.521 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:21:12.521 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:12.521 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3549646 00:21:12.521 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:21:12.521 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:21:12.521 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3549646' 00:21:12.521 killing process with pid 3549646 00:21:12.521 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3549646 00:21:12.521 Received shutdown signal, test time was about 10.000000 seconds 00:21:12.521 00:21:12.521 Latency(us) 00:21:12.521 [2024-11-20T06:19:34.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.521 [2024-11-20T06:19:34.799Z] =================================================================================================================== 00:21:12.521 [2024-11-20T06:19:34.799Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:12.521 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3549646 00:21:12.521 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:12.521 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:12.521 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:12.521 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:12.521 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:12.521 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:12.521 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:12.521 rmmod nvme_tcp 00:21:12.781 rmmod nvme_fabrics 00:21:12.781 rmmod nvme_keyring 00:21:12.781 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:12.781 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:12.781 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:12.781 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3549306 ']' 00:21:12.781 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3549306 00:21:12.781 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3549306 ']' 00:21:12.781 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 3549306 00:21:12.781 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:21:12.781 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:12.781 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3549306 00:21:12.781 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:12.781 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:12.781 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3549306' 00:21:12.781 killing process with pid 3549306 00:21:12.781 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3549306 00:21:12.781 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3549306 00:21:12.781 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:12.781 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:12.781 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:12.781 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:21:12.781 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:12.781 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:12.781 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:12.781 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:12.781 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:12.781 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.781 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:12.781 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.pT0 00:21:15.326 00:21:15.326 real 0m23.179s 00:21:15.326 user 0m24.878s 00:21:15.326 sys 0m9.676s 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:15.326 ************************************ 00:21:15.326 END TEST nvmf_fips 00:21:15.326 ************************************ 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:15.326 ************************************ 00:21:15.326 START TEST nvmf_control_msg_list 00:21:15.326 ************************************ 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:15.326 * Looking for test storage... 
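The iptr teardown traced in the nvmf_fips cleanup above removes the test's firewall rules by round-tripping the kernel ruleset through a text filter, dropping every rule whose comment carries the SPDK_NVMF marker. A standalone sketch of that idiom (run as root; the marker string is taken verbatim from the trace):

    #!/usr/bin/env bash
    # Delete every iptables rule tagged with the SPDK_NVMF comment marker by
    # dumping the ruleset, filtering the tagged lines out, and reloading it.
    set -euo pipefail
    iptables-save | grep -v SPDK_NVMF | iptables-restore

This works because each rule the tests add is created with -m comment --comment 'SPDK_NVMF:...', so a plain text match is enough to find them all, regardless of chain or position.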
00:21:15.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:15.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.326 --rc genhtml_branch_coverage=1 00:21:15.326 --rc genhtml_function_coverage=1 00:21:15.326 --rc genhtml_legend=1 00:21:15.326 --rc geninfo_all_blocks=1 00:21:15.326 --rc geninfo_unexecuted_blocks=1 00:21:15.326 00:21:15.326 ' 00:21:15.326 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:15.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.326 --rc genhtml_branch_coverage=1 00:21:15.326 --rc genhtml_function_coverage=1 00:21:15.326 --rc genhtml_legend=1 00:21:15.327 --rc geninfo_all_blocks=1 00:21:15.327 --rc geninfo_unexecuted_blocks=1 00:21:15.327 00:21:15.327 ' 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:15.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.327 --rc genhtml_branch_coverage=1 00:21:15.327 --rc genhtml_function_coverage=1 00:21:15.327 --rc genhtml_legend=1 00:21:15.327 --rc geninfo_all_blocks=1 00:21:15.327 --rc geninfo_unexecuted_blocks=1 00:21:15.327 00:21:15.327 ' 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:15.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.327 --rc genhtml_branch_coverage=1 00:21:15.327 --rc genhtml_function_coverage=1 00:21:15.327 --rc genhtml_legend=1 00:21:15.327 --rc geninfo_all_blocks=1 00:21:15.327 --rc geninfo_unexecuted_blocks=1 00:21:15.327 00:21:15.327 ' 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:15.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:15.327 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:23.468 07:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:23.468 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:23.468 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.469 07:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:23.469 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:23.469 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:23.469 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:23.469 07:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:23.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:23.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:21:23.469 00:21:23.469 --- 10.0.0.2 ping statistics --- 00:21:23.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.469 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:23.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:23.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:21:23.469 00:21:23.469 --- 10.0.0.1 ping statistics --- 00:21:23.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.469 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3556021 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3556021 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 3556021 ']' 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:23.469 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:23.469 [2024-11-20 07:19:45.007309] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:21:23.469 [2024-11-20 07:19:45.007374] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.469 [2024-11-20 07:19:45.107525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.469 [2024-11-20 07:19:45.158038] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.469 [2024-11-20 07:19:45.158094] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.469 [2024-11-20 07:19:45.158102] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.469 [2024-11-20 07:19:45.158109] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.469 [2024-11-20 07:19:45.158116] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
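waitforlisten, traced above with rpc_addr=/var/tmp/spdk.sock and max_retries=100, blocks until the freshly launched nvmf_tgt answers on its RPC socket. A minimal re-implementation of that loop (the rpc.py path and the rpc_get_methods probe are assumptions based on a stock SPDK checkout, not taken from this log):

    #!/usr/bin/env bash
    # Usage: waitforlisten.sh <pid-of-nvmf_tgt>
    # Poll the SPDK RPC socket until the target responds, mirroring the
    # rpc_addr/max_retries defaults shown in the waitforlisten trace above.
    pid=$1
    rpc_addr=/var/tmp/spdk.sock
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || { echo "target exited early" >&2; exit 1; }
        "$rpc_py" -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && exit 0
        sleep 0.5
    done
    echo "timed out waiting on $rpc_addr" >&2
    exit 1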
00:21:23.469 [2024-11-20 07:19:45.158863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:23.730 [2024-11-20 07:19:45.872943] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:23.730 Malloc0 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.730 07:19:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:23.730 [2024-11-20 07:19:45.927514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3556328 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3556330 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3556332 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3556328 00:21:23.730 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:23.990 [2024-11-20 07:19:46.028359] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:23.990 [2024-11-20 07:19:46.028725] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:23.990 [2024-11-20 07:19:46.029087] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:24.929 Initializing NVMe Controllers 00:21:24.929 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:24.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:24.929 Initialization complete. Launching workers. 
00:21:24.929 ========================================================
00:21:24.929 Latency(us)
00:21:24.929 Device Information : IOPS MiB/s Average min max
00:21:24.929 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1468.00 5.73 681.32 291.45 914.51
00:21:24.929 ========================================================
00:21:24.929 Total : 1468.00 5.73 681.32 291.45 914.51
00:21:24.929
00:21:24.929 Initializing NVMe Controllers
00:21:24.929 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:24.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:21:24.929 Initialization complete. Launching workers.
00:21:24.929 ========================================================
00:21:24.929 Latency(us)
00:21:24.929 Device Information : IOPS MiB/s Average min max
00:21:24.929 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1482.00 5.79 674.56 176.38 857.38
00:21:24.929 ========================================================
00:21:24.929 Total : 1482.00 5.79 674.56 176.38 857.38
00:21:24.929
00:21:24.929 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3556330
00:21:24.929 Initializing NVMe Controllers
00:21:24.930 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:24.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:21:24.930 Initialization complete. Launching workers.
00:21:24.930 ========================================================
00:21:24.930 Latency(us)
00:21:24.930 Device Information : IOPS MiB/s Average min max
00:21:24.930 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40899.34 40776.12 41024.65
00:21:24.930 ========================================================
00:21:24.930 Total : 25.00 0.10 40899.34 40776.12 41024.65
00:21:24.930
00:21:24.930 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3556332
00:21:24.930 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:21:24.930 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:21:24.930 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:24.930 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:21:24.930 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:24.930 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:21:24.930 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:24.930 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:24.930 rmmod nvme_tcp
00:21:25.190 rmmod nvme_fabrics
00:21:25.190 rmmod nvme_keyring
00:21:25.190 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:25.190 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:21:25.190 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:21:25.190 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '['
-n 3556021 ']' 00:21:25.190 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3556021 00:21:25.190 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 3556021 ']' 00:21:25.190 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 3556021 00:21:25.190 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:21:25.190 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:25.190 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3556021 00:21:25.190 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:25.190 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:25.190 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3556021' 00:21:25.190 killing process with pid 3556021 00:21:25.190 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 3556021 00:21:25.190 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 3556021 00:21:25.450 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:25.450 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:25.450 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:25.450 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:25.450 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:25.450 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:25.450 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:25.450 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:25.450 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:25.450 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.450 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.450 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.422 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:27.422 00:21:27.422 real 0m12.388s 00:21:27.422 user 0m7.877s 00:21:27.422 sys 0m6.517s 00:21:27.422 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:27.422 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:27.422 ************************************ 00:21:27.422 END TEST nvmf_control_msg_list 00:21:27.422 ************************************ 
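For reference, the target-side configuration this test drove (a TCP transport limited to a single control message and 768-byte in-capsule data, one 32 MB malloc namespace, and a listener on 10.0.0.2:4420) can be replayed by hand with the same RPCs the rpc_cmd traces above show. Paths assume the Jenkins workspace layout from this log, with the target already running on the default socket:

    #!/usr/bin/env bash
    # Re-create the nvmf_control_msg_list target setup using the RPCs
    # traced above, issued against the default /var/tmp/spdk.sock socket.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    $rpc bdev_malloc_create -b Malloc0 32 512
    $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The three spdk_nvme_perf instances above then drive that listener concurrently with -q 1 -o 4096 -w randread, which is what exercises the single-entry control message list.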
00:21:27.422 07:19:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:27.422 07:19:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:27.422 07:19:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:27.422 07:19:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:27.422 ************************************ 00:21:27.422 START TEST nvmf_wait_for_buf 00:21:27.422 ************************************ 00:21:27.422 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:27.683 * Looking for test storage... 00:21:27.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:27.683 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:27.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.684 --rc genhtml_branch_coverage=1 00:21:27.684 --rc genhtml_function_coverage=1 00:21:27.684 --rc genhtml_legend=1 00:21:27.684 --rc geninfo_all_blocks=1 00:21:27.684 --rc geninfo_unexecuted_blocks=1 00:21:27.684 00:21:27.684 ' 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:27.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.684 --rc genhtml_branch_coverage=1 00:21:27.684 --rc genhtml_function_coverage=1 00:21:27.684 --rc genhtml_legend=1 00:21:27.684 --rc geninfo_all_blocks=1 00:21:27.684 --rc geninfo_unexecuted_blocks=1 00:21:27.684 00:21:27.684 ' 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:27.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.684 --rc genhtml_branch_coverage=1 00:21:27.684 --rc genhtml_function_coverage=1 00:21:27.684 --rc genhtml_legend=1 00:21:27.684 --rc geninfo_all_blocks=1 00:21:27.684 --rc geninfo_unexecuted_blocks=1 00:21:27.684 00:21:27.684 ' 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:27.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.684 --rc genhtml_branch_coverage=1 00:21:27.684 --rc genhtml_function_coverage=1 00:21:27.684 --rc genhtml_legend=1 00:21:27.684 --rc geninfo_all_blocks=1 00:21:27.684 --rc geninfo_unexecuted_blocks=1 00:21:27.684 00:21:27.684 ' 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.684 07:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:27.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:27.684 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:27.685 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:21:27.685 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.685 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:27.685 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:27.685 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:27.685 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.685 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:27.685 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.685 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:27.685 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:27.685 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:27.685 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.828 
07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:35.828 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:35.828 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:35.828 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:35.829 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:35.829 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.829 07:19:57 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:35.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:35.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:21:35.829 00:21:35.829 --- 10.0.0.2 ping statistics --- 00:21:35.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.829 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:35.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:21:35.829 00:21:35.829 --- 10.0.0.1 ping statistics --- 00:21:35.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.829 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3560706 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3560706 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 3560706 ']' 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:35.829 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:35.829 [2024-11-20 07:19:57.436467] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
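For orientation, the wait_for_buf test being started here boils down to the short sequence below. This is a condensed sketch, not the verbatim script: the rpc_cmd invocations and their flags are copied from the xtrace that follows, while the compressed ordering and the failure handling on the last line are a paraphrase of target/wait_for_buf.sh as traced.
  # Sketch of the wait_for_buf flow (flags taken from the trace below):
  rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
  rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # deliberately tiny iobuf pool
  rpc_cmd framework_start_init
  rpc_cmd bdev_malloc_create -b Malloc0 32 512
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24             # only 24 data buffers
  rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  retry_count=$(rpc_cmd iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  [[ $retry_count -eq 0 ]] && exit 1   # pass only if buffer exhaustion actually forced retries
With the iobuf pool and transport buffer counts squeezed far below what 128 KiB random reads at queue depth 4 demand, the interesting assertion is the last line: the target must have hit buffer exhaustion and recovered by retrying (retry_count comes back as 2038 below) rather than failing I/O.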
00:21:35.829 [2024-11-20 07:19:57.436533] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.829 [2024-11-20 07:19:57.536111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.830 [2024-11-20 07:19:57.586670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.830 [2024-11-20 07:19:57.586720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.830 [2024-11-20 07:19:57.586729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.830 [2024-11-20 07:19:57.586736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.830 [2024-11-20 07:19:57.586742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.830 [2024-11-20 07:19:57.587510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.162 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:36.162 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:21:36.162 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:36.162 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:36.162 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:36.162 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.162 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:36.162 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:36.162 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:36.162 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.162 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:36.162 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.162 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:36.162 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.162 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:36.162 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.162 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:36.162 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.162 07:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:36.162 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.162 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:36.162 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.162 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:36.457 Malloc0 00:21:36.457 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.457 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:36.457 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.457 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:36.457 [2024-11-20 07:19:58.416151] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.457 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.457 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:36.457 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.457 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:36.457 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.457 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:36.457 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.457 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:36.457 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.457 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:36.457 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.457 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:36.457 [2024-11-20 07:19:58.452482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.457 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.457 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:36.457 [2024-11-20 07:19:58.558267] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:21:37.846 Initializing NVMe Controllers
00:21:37.846 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:37.846 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:21:37.846 Initialization complete. Launching workers.
00:21:37.846 ========================================================
00:21:37.846 Latency(us)
00:21:37.846 Device Information : IOPS MiB/s Average min max
00:21:37.846 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32263.90 8000.89 63856.09
00:21:37.846 ========================================================
00:21:37.846 Total : 129.00 16.12 32263.90 8000.89 63856.09
00:21:37.846
00:21:37.846 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:21:37.846 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:21:37.846 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:37.846 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:38.106 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:38.106 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038
00:21:38.106 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]]
00:21:38.106 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:21:38.106 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini
00:21:38.106 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:38.106 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync
00:21:38.107 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:38.107 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e
00:21:38.107 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:38.107 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:38.107 rmmod nvme_tcp
00:21:38.107 rmmod nvme_fabrics
00:21:38.107 rmmod nvme_keyring
00:21:38.107 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:38.107 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e
00:21:38.107 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0
00:21:38.107 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3560706 ']'
00:21:38.107 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3560706
00:21:38.107 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 3560706 ']'
00:21:38.107 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 3560706
00:21:38.107 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@957 -- # uname 00:21:38.107 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:38.107 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3560706 00:21:38.107 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:38.107 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:38.107 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3560706' 00:21:38.107 killing process with pid 3560706 00:21:38.107 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 3560706 00:21:38.107 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 3560706 00:21:38.367 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:38.368 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:38.368 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:38.368 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:38.368 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:38.368 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:38.368 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:38.368 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:38.368 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:38.368 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.368 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.368 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.281 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:40.542 00:21:40.542 real 0m12.891s 00:21:40.542 user 0m5.261s 00:21:40.542 sys 0m6.217s 00:21:40.542 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:40.542 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:40.542 ************************************ 00:21:40.542 END TEST nvmf_wait_for_buf 00:21:40.542 ************************************ 00:21:40.542 07:20:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:40.542 07:20:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:40.542 07:20:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:40.542 07:20:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:40.542 07:20:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:40.542 07:20:02 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:48.689 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:48.690 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:48.690 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:48.690 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:48.690 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:48.690 ************************************ 00:21:48.690 START TEST nvmf_perf_adq 00:21:48.690 ************************************ 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:48.690 * Looking for test storage... 00:21:48.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:48.690 07:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:48.690 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:48.690 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:48.690 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:48.690 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:48.690 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:48.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.690 --rc genhtml_branch_coverage=1 00:21:48.690 --rc genhtml_function_coverage=1 00:21:48.690 --rc genhtml_legend=1 00:21:48.690 --rc geninfo_all_blocks=1 00:21:48.690 --rc geninfo_unexecuted_blocks=1 00:21:48.690 00:21:48.690 ' 00:21:48.690 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:48.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.690 --rc genhtml_branch_coverage=1 00:21:48.690 --rc genhtml_function_coverage=1 00:21:48.690 --rc genhtml_legend=1 00:21:48.690 --rc geninfo_all_blocks=1 00:21:48.690 --rc geninfo_unexecuted_blocks=1 00:21:48.690 00:21:48.690 ' 00:21:48.690 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:48.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.690 --rc genhtml_branch_coverage=1 00:21:48.690 --rc genhtml_function_coverage=1 00:21:48.690 --rc genhtml_legend=1 00:21:48.690 --rc geninfo_all_blocks=1 00:21:48.690 --rc geninfo_unexecuted_blocks=1 00:21:48.690 00:21:48.690 ' 00:21:48.690 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:48.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.690 --rc genhtml_branch_coverage=1 00:21:48.690 --rc genhtml_function_coverage=1 00:21:48.690 --rc genhtml_legend=1 00:21:48.690 --rc geninfo_all_blocks=1 00:21:48.690 --rc geninfo_unexecuted_blocks=1 00:21:48.690 00:21:48.690 ' 00:21:48.690 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
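Before perf_adq can re-initialize the fabric, it puts the NICs into an ADQ-capable state. The helper condensed below is taken from the adq_reload_driver trace further down in this run; the commands are exactly what the trace executes, while the comments on why each step exists are interpretation rather than text from the script.
  # adq_reload_driver, as exercised in the trace below:
  modprobe -a sch_mqprio   # ADQ steers per-queue traffic through the mqprio qdisc
  rmmod ice                # unload the E810 driver so it comes back in a clean state
  modprobe ice
  sleep 5                  # give the driver time to re-create its net devices
Because perf_adq.sh sources nvmf/common.sh afresh, the environment setup and E810/x722 PCI discovery that follow repeat the pattern already seen in the wait_for_buf run above, ending with the same cvl_0_0/cvl_0_1 net devices.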
00:21:48.690 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:48.690 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.690 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.690 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.690 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.690 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.690 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.690 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.690 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.690 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:48.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:48.691 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:48.691 07:20:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:55.278 07:20:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:55.278 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:55.278 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:55.278 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:55.279 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:55.279 07:20:17 
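gather_supported_nvmf_pci_devs, traced above, fills the e810/x722/mlx arrays from a PCI bus cache keyed by vendor:device (Intel 0x8086, Mellanox 0x15b3) and then lists the netdevs registered under each matching function. A rough sysfs-only equivalent of the discovery loop, assuming nothing beyond the standard /sys/bus/pci layout (the pci_bus_cache helper itself is not expanded in this log):

    # Enumerate PCI functions matching the E810 NIC (0x8086:0x159b) and
    # print the net devices registered under each, mirroring the log's
    # "Found 0000:4b:00.0 ..." / "Found net devices under ..." lines.
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(cat "$dev/vendor")    # e.g. 0x8086
        device=$(cat "$dev/device")    # e.g. 0x159b
        if [ "$vendor" = "0x8086" ] && [ "$device" = "0x159b" ]; then
            echo "Found ${dev##*/} ($vendor - $device)"
            ls "$dev/net" 2>/dev/null  # netdev names, e.g. cvl_0_0
        fi
    done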
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:55.279 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:55.279 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:56.662 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:59.206 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
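The adq_reload_driver sequence above restarts the E810 driver so its queues come up clean before ADQ is configured, and loads sch_mqprio for the traffic-class qdisc used later. Collected in one place (the || true guard is an addition so the snippet also succeeds when ice is not currently loaded):

    modprobe -a sch_mqprio    # qdisc module required for the mqprio setup later
    rmmod ice || true         # unload the E810 driver
    modprobe ice              # reload it for a clean queue state
    sleep 5                   # give the driver time to re-create its netdevs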
gather_supported_nvmf_pci_devs 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:04.492 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:04.492 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:04.492 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:04.492 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.492 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:04.493 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:04.493 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:04.493 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:04.493 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:04.493 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:04.493 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:04.493 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:04.493 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:04.493 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:04.493 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:04.493 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:04.493 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:04.493 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:04.493 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:04.493 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:04.493 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:04.493 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:04.493 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:04.493 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:04.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:04.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:22:04.493 00:22:04.493 --- 10.0.0.2 ping statistics --- 00:22:04.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.493 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:04.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:04.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:22:04.493 00:22:04.493 --- 10.0.0.1 ping statistics --- 00:22:04.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.493 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3570946 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3570946 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3570946 ']' 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:04.493 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.493 [2024-11-20 07:20:26.326487] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
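For reference, the nvmf_tcp_init wiring traced above, condensed: the target port cvl_0_0 is moved into namespace cvl_0_0_ns_spdk at 10.0.0.2, the initiator port cvl_0_1 stays in the default namespace at 10.0.0.1, port 4420 is opened with a rule tagged SPDK_NVMF so teardown can find it later, and reachability is checked in both directions before the target app is launched inside the namespace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # accept NVMe/TCP on 4420; the comment tag lets cleanup strip the rule
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator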
00:22:04.493 [2024-11-20 07:20:26.326583] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.493 [2024-11-20 07:20:26.428239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:04.493 [2024-11-20 07:20:26.482087] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.493 [2024-11-20 07:20:26.482167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.493 [2024-11-20 07:20:26.482176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.493 [2024-11-20 07:20:26.482183] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.493 [2024-11-20 07:20:26.482190] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:04.493 [2024-11-20 07:20:26.484579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.493 [2024-11-20 07:20:26.484754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.493 [2024-11-20 07:20:26.484895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:04.493 [2024-11-20 07:20:26.484896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.064 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:05.064 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:22:05.064 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:05.064 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:05.064 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.064 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.064 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:05.064 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:05.064 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:05.064 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.064 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.064 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.064 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:05.064 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:05.064 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.064 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.064 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.064 
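The rpc_cmd calls beginning above and continuing below configure the target for the baseline run (adq_configure_nvmf_target 0: placement-id 0, sock-priority 0). rpc_cmd is autotest shorthand; assuming it forwards to scripts/rpc.py in the spdk checkout as usual, the sequence amounts to:

    # Baseline (non-ADQ) target configuration over the RPC socket.
    scripts/rpc.py sock_impl_set_options -i posix \
        --enable-placement-id 0 --enable-zerocopy-send-server
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o \
        --io-unit-size 8192 --sock-priority 0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420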
07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:05.065 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.065 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.065 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.065 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:05.065 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.065 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.325 [2024-11-20 07:20:27.341054] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.325 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.325 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:05.325 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.325 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.325 Malloc1 00:22:05.325 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.325 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:05.325 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.325 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.325 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.325 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:05.325 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.325 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.325 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.325 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:05.325 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.325 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.325 [2024-11-20 07:20:27.422329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.325 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.325 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3571299 00:22:05.325 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:05.325 07:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:07.235 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:07.235 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.235 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.235 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.235 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:07.235 "tick_rate": 2400000000, 00:22:07.235 "poll_groups": [ 00:22:07.235 { 00:22:07.235 "name": "nvmf_tgt_poll_group_000", 00:22:07.235 "admin_qpairs": 1, 00:22:07.235 "io_qpairs": 1, 00:22:07.235 "current_admin_qpairs": 1, 00:22:07.235 "current_io_qpairs": 1, 00:22:07.235 "pending_bdev_io": 0, 00:22:07.235 "completed_nvme_io": 16656, 00:22:07.235 "transports": [ 00:22:07.235 { 00:22:07.235 "trtype": "TCP" 00:22:07.235 } 00:22:07.235 ] 00:22:07.235 }, 00:22:07.235 { 00:22:07.235 "name": "nvmf_tgt_poll_group_001", 00:22:07.235 "admin_qpairs": 0, 00:22:07.235 "io_qpairs": 1, 00:22:07.235 "current_admin_qpairs": 0, 00:22:07.235 "current_io_qpairs": 1, 00:22:07.235 "pending_bdev_io": 0, 00:22:07.235 "completed_nvme_io": 18432, 00:22:07.235 "transports": [ 00:22:07.235 { 00:22:07.235 "trtype": "TCP" 00:22:07.235 } 00:22:07.235 ] 00:22:07.235 }, 00:22:07.235 { 00:22:07.235 "name": "nvmf_tgt_poll_group_002", 00:22:07.235 "admin_qpairs": 0, 00:22:07.235 "io_qpairs": 1, 00:22:07.235 "current_admin_qpairs": 0, 00:22:07.235 "current_io_qpairs": 1, 00:22:07.235 "pending_bdev_io": 0, 00:22:07.235 "completed_nvme_io": 19031, 00:22:07.235 "transports": [ 00:22:07.235 { 00:22:07.235 "trtype": "TCP" 00:22:07.235 } 00:22:07.235 ] 00:22:07.235 }, 00:22:07.235 { 00:22:07.235 "name": "nvmf_tgt_poll_group_003", 00:22:07.235 "admin_qpairs": 0, 00:22:07.235 "io_qpairs": 1, 00:22:07.235 "current_admin_qpairs": 0, 00:22:07.235 "current_io_qpairs": 1, 00:22:07.235 "pending_bdev_io": 0, 00:22:07.235 "completed_nvme_io": 16181, 00:22:07.235 "transports": [ 00:22:07.235 { 00:22:07.235 "trtype": "TCP" 00:22:07.235 } 00:22:07.235 ] 00:22:07.235 } 00:22:07.235 ] 00:22:07.235 }' 00:22:07.235 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:07.235 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:07.236 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:07.236 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:07.236 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3571299 00:22:15.367 Initializing NVMe Controllers 00:22:15.367 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:15.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:15.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:15.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:15.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:22:15.367 Initialization complete. Launching workers. 00:22:15.367 ======================================================== 00:22:15.367 Latency(us) 00:22:15.367 Device Information : IOPS MiB/s Average min max 00:22:15.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12741.70 49.77 5023.34 1195.16 11836.62 00:22:15.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13271.10 51.84 4823.23 1235.61 14550.03 00:22:15.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13628.70 53.24 4696.43 1232.51 14026.51 00:22:15.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12286.70 47.99 5221.18 1221.55 45077.89 00:22:15.367 ======================================================== 00:22:15.367 Total : 51928.19 202.84 4933.21 1195.16 45077.89 00:22:15.367 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:15.627 rmmod nvme_tcp 00:22:15.627 rmmod nvme_fabrics 00:22:15.627 rmmod nvme_keyring 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3570946 ']' 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3570946 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3570946 ']' 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3570946 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3570946 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3570946' 00:22:15.627 killing process with pid 3570946 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3570946 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3570946 00:22:15.627 07:20:37 
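The count=4 check above is how the test asserts the qpair spread: nvmf_get_stats reports four poll groups, each currently owning one I/O qpair (16-19k completed I/Os apiece), and the perf run totals roughly 52k IOPS at about 4.9 ms average latency. The assertion itself, assuming the usual rpc.py wrapper behind rpc_cmd:

    # Count poll groups that own exactly one active I/O qpair; each
    # matching group prints one line, so wc -l yields the group count.
    count=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l)
    [ "$count" -ne 4 ] && echo "unexpected qpair distribution: $count"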
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:15.627 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:15.888 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:15.888 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:15.888 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:15.888 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:15.888 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:15.888 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:15.888 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.888 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.888 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.815 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:17.815 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:17.815 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:17.815 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:19.724 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:21.637 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
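nvmftestfini, traced above, undoes the setup: unload the NVMe host modules, kill the target, strip only the iptables rules the test tagged SPDK_NVMF, and drop the namespace and addresses. Roughly (the _remove_spdk_ns helper is not expanded in this log, so the ip netns delete line is an assumption about what it does):

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # remove only the rules carrying the SPDK_NVMF comment tag
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk    # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1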
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:26.933 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:26.933 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.933 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:26.934 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:26.934 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:26.934 07:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:26.934 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:26.934 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:26.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:22:26.934 00:22:26.934 --- 10.0.0.2 ping statistics --- 00:22:26.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.934 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:22:26.934 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:26.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:26.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:22:26.934 00:22:26.934 --- 10.0.0.1 ping statistics --- 00:22:26.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.934 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:22:26.934 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.934 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:26.934 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:26.934 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.934 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:26.934 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:26.934 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.934 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:26.934 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:26.934 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:26.934 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:26.934 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:26.934 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:26.934 net.core.busy_poll = 1 00:22:26.934 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:26.934 net.core.busy_read = 1 00:22:26.934 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:26.934 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:27.195 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:27.195 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:27.195 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:27.195 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:27.195 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:27.195 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:27.195 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.195 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3575773 00:22:27.195 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3575773 00:22:27.195 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:27.195 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3575773 ']' 00:22:27.195 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.195 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:27.195 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.195 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:27.195 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.195 [2024-11-20 07:20:49.407121] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:22:27.195 [2024-11-20 07:20:49.407203] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.455 [2024-11-20 07:20:49.507727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:27.455 [2024-11-20 07:20:49.560863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
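This is the ADQ configuration proper (adq_configure_driver): hardware TC offload is enabled on the target port, busy polling is turned on, an mqprio root qdisc splits the NIC into two traffic classes with two hardware queues each, and a flower filter steers NVMe/TCP traffic for 10.0.0.2:4420 into TC 1 entirely in hardware (skip_sw). Collected from the trace above:

    # $NS expands unquoted into a command prefix; everything touching the
    # NIC runs inside the target namespace, the sysctls are global.
    NS="ip netns exec cvl_0_0_ns_spdk"
    $NS ethtool --offload cvl_0_0 hw-tc-offload on
    $NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    $NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 \
        queues 2@0 2@2 hw 1 mode channel
    $NS tc qdisc add dev cvl_0_0 ingress
    $NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

With this in place, the second target below is created with --enable-placement-id 1 and --sock-priority 1, so accepted connections are pinned to the poll group that owns their hardware queue, which is the behavior this second perf run exercises.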
00:22:27.455 [2024-11-20 07:20:49.560918] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.455 [2024-11-20 07:20:49.560928] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.455 [2024-11-20 07:20:49.560935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.455 [2024-11-20 07:20:49.560942] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:27.455 [2024-11-20 07:20:49.563135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.455 [2024-11-20 07:20:49.563294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.455 [2024-11-20 07:20:49.563613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:27.455 [2024-11-20 07:20:49.563616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.029 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:28.029 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:22:28.029 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:28.029 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:28.029 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.029 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.029 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:28.029 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:28.029 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:28.029 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.029 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.029 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.290 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:28.290 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.291 07:20:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.291 [2024-11-20 07:20:50.435924] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.291 Malloc1 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.291 [2024-11-20 07:20:50.511422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3576126 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:28.291 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:30.839 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:30.839 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.839 07:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.839 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.839 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:30.839 "tick_rate": 2400000000, 00:22:30.839 "poll_groups": [ 00:22:30.839 { 00:22:30.839 "name": "nvmf_tgt_poll_group_000", 00:22:30.839 "admin_qpairs": 1, 00:22:30.839 "io_qpairs": 1, 00:22:30.839 "current_admin_qpairs": 1, 00:22:30.839 "current_io_qpairs": 1, 00:22:30.839 "pending_bdev_io": 0, 00:22:30.839 "completed_nvme_io": 24748, 00:22:30.839 "transports": [ 00:22:30.839 { 00:22:30.839 "trtype": "TCP" 00:22:30.839 } 00:22:30.839 ] 00:22:30.839 }, 00:22:30.839 { 00:22:30.839 "name": "nvmf_tgt_poll_group_001", 00:22:30.839 "admin_qpairs": 0, 00:22:30.839 "io_qpairs": 3, 00:22:30.839 "current_admin_qpairs": 0, 00:22:30.839 "current_io_qpairs": 3, 00:22:30.839 "pending_bdev_io": 0, 00:22:30.839 "completed_nvme_io": 29791, 00:22:30.839 "transports": [ 00:22:30.839 { 00:22:30.839 "trtype": "TCP" 00:22:30.839 } 00:22:30.839 ] 00:22:30.839 }, 00:22:30.839 { 00:22:30.839 "name": "nvmf_tgt_poll_group_002", 00:22:30.839 "admin_qpairs": 0, 00:22:30.839 "io_qpairs": 0, 00:22:30.839 "current_admin_qpairs": 0, 00:22:30.839 "current_io_qpairs": 0, 00:22:30.839 "pending_bdev_io": 0, 00:22:30.839 "completed_nvme_io": 0, 00:22:30.839 "transports": [ 00:22:30.839 { 00:22:30.839 "trtype": "TCP" 00:22:30.839 } 00:22:30.839 ] 00:22:30.839 }, 00:22:30.839 { 00:22:30.839 "name": "nvmf_tgt_poll_group_003", 00:22:30.839 "admin_qpairs": 0, 00:22:30.839 "io_qpairs": 0, 00:22:30.839 "current_admin_qpairs": 0, 00:22:30.839 "current_io_qpairs": 0, 00:22:30.839 "pending_bdev_io": 0, 00:22:30.839 "completed_nvme_io": 0, 00:22:30.839 "transports": [ 00:22:30.839 { 00:22:30.839 "trtype": "TCP" 00:22:30.839 } 00:22:30.839 ] 00:22:30.839 } 00:22:30.839 ] 00:22:30.839 }' 00:22:30.839 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:30.839 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:30.839 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:30.839 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:30.839 07:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3576126 00:22:38.981 Initializing NVMe Controllers 00:22:38.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:38.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:38.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:38.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:38.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:38.981 Initialization complete. Launching workers. 
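Before waiting out the perf run, the test polls nvmf_get_stats and counts poll groups that own no I/O qpairs: the jq filter prints one line per idle group (length on the selected object), wc -l tallies them, and the [[ 2 -lt 2 ]] guard lets the test proceed only because at least two of the four groups stayed idle, i.e. ADQ steering confined all I/O qpairs to the remaining groups. A standalone sketch of the same check, going through scripts/rpc.py rather than the harness's rpc_cmd wrapper:

    # count poll groups with no active I/O qpairs
    idle=$(./scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
    if [[ $idle -lt 2 ]]; then
        echo "ADQ steering failed: I/O qpairs spread across too many poll groups"
    fi

The stats above bear this out: nvmf_tgt_poll_group_000 and _001 carry all four I/O qpairs (one and three respectively) while _002 and _003 sit at zero, and the per-core latency table that follows shows the matching uneven split on the initiator side, with one lcore sustaining roughly twice the IOPS of the others.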
00:22:38.981 ======================================================== 00:22:38.981 Latency(us) 00:22:38.981 Device Information : IOPS MiB/s Average min max 00:22:38.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6861.80 26.80 9364.47 1073.57 59600.80 00:22:38.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15389.50 60.12 4157.97 972.76 45575.07 00:22:38.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6585.70 25.73 9720.44 1263.14 59969.79 00:22:38.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7072.20 27.63 9052.31 990.76 56224.55 00:22:38.982 ======================================================== 00:22:38.982 Total : 35909.20 140.27 7136.94 972.76 59969.79 00:22:38.982 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:38.982 rmmod nvme_tcp 00:22:38.982 rmmod nvme_fabrics 00:22:38.982 rmmod nvme_keyring 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3575773 ']' 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3575773 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3575773 ']' 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3575773 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3575773 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3575773' 00:22:38.982 killing process with pid 3575773 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3575773 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3575773 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:38.982 07:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.982 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:42.282 00:22:42.282 real 0m54.280s 00:22:42.282 user 2m50.463s 00:22:42.282 sys 0m11.586s 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:42.282 ************************************ 00:22:42.282 END TEST nvmf_perf_adq 00:22:42.282 ************************************ 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:42.282 ************************************ 00:22:42.282 START TEST nvmf_shutdown 00:22:42.282 ************************************ 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:42.282 * Looking for test storage... 
00:22:42.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:42.282 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:42.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.283 --rc genhtml_branch_coverage=1 00:22:42.283 --rc genhtml_function_coverage=1 00:22:42.283 --rc genhtml_legend=1 00:22:42.283 --rc geninfo_all_blocks=1 00:22:42.283 --rc geninfo_unexecuted_blocks=1 00:22:42.283 00:22:42.283 ' 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:42.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.283 --rc genhtml_branch_coverage=1 00:22:42.283 --rc genhtml_function_coverage=1 00:22:42.283 --rc genhtml_legend=1 00:22:42.283 --rc geninfo_all_blocks=1 00:22:42.283 --rc geninfo_unexecuted_blocks=1 00:22:42.283 00:22:42.283 ' 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:42.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.283 --rc genhtml_branch_coverage=1 00:22:42.283 --rc genhtml_function_coverage=1 00:22:42.283 --rc genhtml_legend=1 00:22:42.283 --rc geninfo_all_blocks=1 00:22:42.283 --rc geninfo_unexecuted_blocks=1 00:22:42.283 00:22:42.283 ' 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:42.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.283 --rc genhtml_branch_coverage=1 00:22:42.283 --rc genhtml_function_coverage=1 00:22:42.283 --rc genhtml_legend=1 00:22:42.283 --rc geninfo_all_blocks=1 00:22:42.283 --rc geninfo_unexecuted_blocks=1 00:22:42.283 00:22:42.283 ' 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
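The scripts/common.sh trace above is the harness's version gate: it pulls the installed lcov version with awk '{print $NF}', then lt 1.15 2 compares the two dotted strings field by field to decide whether the pre-2.0 --rc lcov_branch_coverage=1 spelling of the coverage flags applies. A condensed sketch of that comparison (simplified; the real cmp_versions also validates that each field is numeric before comparing):

    lt() {
        local IFS=.- i v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0   # first lower field decides
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov predates 2.x"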
00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:42.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:42.283 07:21:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:42.283 ************************************ 00:22:42.283 START TEST nvmf_shutdown_tc1 00:22:42.283 ************************************ 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:42.283 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.422 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.422 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:50.422 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:50.422 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:50.422 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:50.422 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:50.422 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:50.422 07:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:50.422 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:50.422 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:50.422 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:50.422 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:50.422 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:50.422 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:50.422 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:50.422 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.422 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:50.423 07:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:50.423 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:50.423 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:50.423 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:50.423 07:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:50.423 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:50.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:22:50.423 00:22:50.423 --- 10.0.0.2 ping statistics --- 00:22:50.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.423 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:22:50.423 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:50.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:22:50.423 00:22:50.423 --- 10.0.0.1 ping statistics --- 00:22:50.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.423 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:22:50.423 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.423 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:50.423 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:50.423 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.423 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:50.424 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:50.424 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.424 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:50.424 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:50.424 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:50.424 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:50.424 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:50.424 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.424 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3583152 00:22:50.424 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3583152 00:22:50.424 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:50.424 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3583152 ']' 00:22:50.424 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.424 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:50.424 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
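nvmfappstart then brings up the target for the shutdown test: it launches nvmf_tgt inside the target namespace with core mask 0x1E, records nvmfpid, and waitforlisten blocks (up to the max_retries=100 visible in the trace) until the app answers on /var/tmp/spdk.sock. A minimal sketch of the same start-and-wait pattern, using the spdk_get_version RPC as the liveness probe:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # poll the RPC socket until the target is ready to accept commands
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done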
00:22:50.424 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:50.424 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.424 [2024-11-20 07:21:12.120280] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:22:50.424 [2024-11-20 07:21:12.120348] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.424 [2024-11-20 07:21:12.220499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:50.424 [2024-11-20 07:21:12.272204] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.424 [2024-11-20 07:21:12.272253] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.424 [2024-11-20 07:21:12.272262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.424 [2024-11-20 07:21:12.272270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.424 [2024-11-20 07:21:12.272276] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:50.424 [2024-11-20 07:21:12.274644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.424 [2024-11-20 07:21:12.274805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:50.424 [2024-11-20 07:21:12.274967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.424 [2024-11-20 07:21:12.274967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:50.686 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:50.686 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:22:50.686 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:50.686 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:50.686 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.947 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.948 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:50.948 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.948 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.948 [2024-11-20 07:21:13.006803] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:50.948 07:21:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.948 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.948 Malloc1 
00:22:50.948 [2024-11-20 07:21:13.128801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.948 Malloc2 00:22:50.948 Malloc3 00:22:51.209 Malloc4 00:22:51.209 Malloc5 00:22:51.209 Malloc6 00:22:51.209 Malloc7 00:22:51.209 Malloc8 00:22:51.471 Malloc9 00:22:51.471 Malloc10 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3583537 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3583537 /var/tmp/bdevperf.sock 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3583537 ']' 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
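Note how the ten subsystems materialize in one burst: create_subsystems appends every RPC to rpcs.txt (the repeated cat loop in the trace) and replays the whole file through a single rpc_cmd invocation, which is why Malloc1 through Malloc10 and the listener notice appear back to back above. Expanded, the batch is equivalent to the sketch below (serial numbers illustrative; rpc.py run with no subcommand reads one call per line from stdin):

    rm -f rpcs.txt
    for i in {1..10}; do
        {
            echo "bdev_malloc_create 64 512 -b Malloc$i"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> rpcs.txt
    done
    ./scripts/rpc.py < rpcs.txt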
00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.471 { 00:22:51.471 "params": { 00:22:51.471 "name": "Nvme$subsystem", 00:22:51.471 "trtype": "$TEST_TRANSPORT", 00:22:51.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.471 "adrfam": "ipv4", 00:22:51.471 "trsvcid": "$NVMF_PORT", 00:22:51.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.471 "hdgst": ${hdgst:-false}, 00:22:51.471 "ddgst": ${ddgst:-false} 00:22:51.471 }, 00:22:51.471 "method": "bdev_nvme_attach_controller" 00:22:51.471 } 00:22:51.471 EOF 00:22:51.471 )") 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.471 { 00:22:51.471 "params": { 00:22:51.471 "name": "Nvme$subsystem", 00:22:51.471 "trtype": "$TEST_TRANSPORT", 00:22:51.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.471 "adrfam": "ipv4", 00:22:51.471 "trsvcid": "$NVMF_PORT", 00:22:51.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.471 "hdgst": ${hdgst:-false}, 00:22:51.471 "ddgst": ${ddgst:-false} 00:22:51.471 }, 00:22:51.471 "method": "bdev_nvme_attach_controller" 00:22:51.471 } 00:22:51.471 EOF 00:22:51.471 )") 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.471 { 00:22:51.471 "params": { 00:22:51.471 "name": "Nvme$subsystem", 00:22:51.471 "trtype": "$TEST_TRANSPORT", 00:22:51.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.471 "adrfam": "ipv4", 00:22:51.471 "trsvcid": "$NVMF_PORT", 00:22:51.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.471 "hdgst": ${hdgst:-false}, 00:22:51.471 "ddgst": ${ddgst:-false} 00:22:51.471 }, 00:22:51.471 "method": "bdev_nvme_attach_controller" 
00:22:51.471 } 00:22:51.471 EOF 00:22:51.471 )") 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.471 { 00:22:51.471 "params": { 00:22:51.471 "name": "Nvme$subsystem", 00:22:51.471 "trtype": "$TEST_TRANSPORT", 00:22:51.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.471 "adrfam": "ipv4", 00:22:51.471 "trsvcid": "$NVMF_PORT", 00:22:51.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.471 "hdgst": ${hdgst:-false}, 00:22:51.471 "ddgst": ${ddgst:-false} 00:22:51.471 }, 00:22:51.471 "method": "bdev_nvme_attach_controller" 00:22:51.471 } 00:22:51.471 EOF 00:22:51.471 )") 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.471 { 00:22:51.471 "params": { 00:22:51.471 "name": "Nvme$subsystem", 00:22:51.471 "trtype": "$TEST_TRANSPORT", 00:22:51.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.471 "adrfam": "ipv4", 00:22:51.471 "trsvcid": "$NVMF_PORT", 00:22:51.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.471 "hdgst": ${hdgst:-false}, 00:22:51.471 "ddgst": ${ddgst:-false} 00:22:51.471 }, 00:22:51.471 "method": "bdev_nvme_attach_controller" 00:22:51.471 } 00:22:51.471 EOF 00:22:51.471 )") 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.471 { 00:22:51.471 "params": { 00:22:51.471 "name": "Nvme$subsystem", 00:22:51.471 "trtype": "$TEST_TRANSPORT", 00:22:51.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.471 "adrfam": "ipv4", 00:22:51.471 "trsvcid": "$NVMF_PORT", 00:22:51.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.471 "hdgst": ${hdgst:-false}, 00:22:51.471 "ddgst": ${ddgst:-false} 00:22:51.471 }, 00:22:51.471 "method": "bdev_nvme_attach_controller" 00:22:51.471 } 00:22:51.471 EOF 00:22:51.471 )") 00:22:51.471 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:51.471 [2024-11-20 07:21:13.653152] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:22:51.472 [2024-11-20 07:21:13.653232] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:51.472 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.472 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.472 { 00:22:51.472 "params": { 00:22:51.472 "name": "Nvme$subsystem", 00:22:51.472 "trtype": "$TEST_TRANSPORT", 00:22:51.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.472 "adrfam": "ipv4", 00:22:51.472 "trsvcid": "$NVMF_PORT", 00:22:51.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.472 "hdgst": ${hdgst:-false}, 00:22:51.472 "ddgst": ${ddgst:-false} 00:22:51.472 }, 00:22:51.472 "method": "bdev_nvme_attach_controller" 00:22:51.472 } 00:22:51.472 EOF 00:22:51.472 )") 00:22:51.472 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:51.472 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.472 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.472 { 00:22:51.472 "params": { 00:22:51.472 "name": "Nvme$subsystem", 00:22:51.472 "trtype": "$TEST_TRANSPORT", 00:22:51.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.472 "adrfam": "ipv4", 00:22:51.472 "trsvcid": "$NVMF_PORT", 00:22:51.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.472 "hdgst": ${hdgst:-false}, 00:22:51.472 "ddgst": ${ddgst:-false} 00:22:51.472 }, 00:22:51.472 "method": "bdev_nvme_attach_controller" 00:22:51.472 } 00:22:51.472 EOF 00:22:51.472 )") 00:22:51.472 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:51.472 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.472 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.472 { 00:22:51.472 "params": { 00:22:51.472 "name": "Nvme$subsystem", 00:22:51.472 "trtype": "$TEST_TRANSPORT", 00:22:51.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.472 "adrfam": "ipv4", 00:22:51.472 "trsvcid": "$NVMF_PORT", 00:22:51.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.472 "hdgst": ${hdgst:-false}, 00:22:51.472 "ddgst": ${ddgst:-false} 00:22:51.472 }, 00:22:51.472 "method": "bdev_nvme_attach_controller" 00:22:51.472 } 00:22:51.472 EOF 00:22:51.472 )") 00:22:51.472 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:51.472 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.472 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.472 { 00:22:51.472 "params": { 00:22:51.472 "name": "Nvme$subsystem", 00:22:51.472 "trtype": "$TEST_TRANSPORT", 00:22:51.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.472 "adrfam": "ipv4", 
00:22:51.472 "trsvcid": "$NVMF_PORT", 00:22:51.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.472 "hdgst": ${hdgst:-false}, 00:22:51.472 "ddgst": ${ddgst:-false} 00:22:51.472 }, 00:22:51.472 "method": "bdev_nvme_attach_controller" 00:22:51.472 } 00:22:51.472 EOF 00:22:51.472 )") 00:22:51.472 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:51.472 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:51.472 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:51.472 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:51.472 "params": { 00:22:51.472 "name": "Nvme1", 00:22:51.472 "trtype": "tcp", 00:22:51.472 "traddr": "10.0.0.2", 00:22:51.472 "adrfam": "ipv4", 00:22:51.472 "trsvcid": "4420", 00:22:51.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.472 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:51.472 "hdgst": false, 00:22:51.472 "ddgst": false 00:22:51.472 }, 00:22:51.472 "method": "bdev_nvme_attach_controller" 00:22:51.472 },{ 00:22:51.472 "params": { 00:22:51.472 "name": "Nvme2", 00:22:51.472 "trtype": "tcp", 00:22:51.472 "traddr": "10.0.0.2", 00:22:51.472 "adrfam": "ipv4", 00:22:51.472 "trsvcid": "4420", 00:22:51.472 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:51.472 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:51.472 "hdgst": false, 00:22:51.472 "ddgst": false 00:22:51.472 }, 00:22:51.472 "method": "bdev_nvme_attach_controller" 00:22:51.472 },{ 00:22:51.472 "params": { 00:22:51.472 "name": "Nvme3", 00:22:51.472 "trtype": "tcp", 00:22:51.472 "traddr": "10.0.0.2", 00:22:51.472 "adrfam": "ipv4", 00:22:51.472 "trsvcid": "4420", 00:22:51.472 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:51.472 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:51.472 "hdgst": false, 00:22:51.472 "ddgst": false 00:22:51.472 }, 00:22:51.472 "method": "bdev_nvme_attach_controller" 00:22:51.472 },{ 00:22:51.472 "params": { 00:22:51.472 "name": "Nvme4", 00:22:51.472 "trtype": "tcp", 00:22:51.472 "traddr": "10.0.0.2", 00:22:51.472 "adrfam": "ipv4", 00:22:51.472 "trsvcid": "4420", 00:22:51.472 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:51.472 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:51.472 "hdgst": false, 00:22:51.472 "ddgst": false 00:22:51.472 }, 00:22:51.472 "method": "bdev_nvme_attach_controller" 00:22:51.472 },{ 00:22:51.472 "params": { 00:22:51.472 "name": "Nvme5", 00:22:51.472 "trtype": "tcp", 00:22:51.472 "traddr": "10.0.0.2", 00:22:51.472 "adrfam": "ipv4", 00:22:51.472 "trsvcid": "4420", 00:22:51.472 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:51.472 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:51.472 "hdgst": false, 00:22:51.472 "ddgst": false 00:22:51.472 }, 00:22:51.472 "method": "bdev_nvme_attach_controller" 00:22:51.472 },{ 00:22:51.472 "params": { 00:22:51.472 "name": "Nvme6", 00:22:51.472 "trtype": "tcp", 00:22:51.472 "traddr": "10.0.0.2", 00:22:51.472 "adrfam": "ipv4", 00:22:51.472 "trsvcid": "4420", 00:22:51.472 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:51.472 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:51.472 "hdgst": false, 00:22:51.472 "ddgst": false 00:22:51.472 }, 00:22:51.472 "method": "bdev_nvme_attach_controller" 00:22:51.472 },{ 00:22:51.472 "params": { 00:22:51.472 "name": "Nvme7", 00:22:51.472 "trtype": "tcp", 00:22:51.472 "traddr": "10.0.0.2", 00:22:51.472 
"adrfam": "ipv4", 00:22:51.472 "trsvcid": "4420", 00:22:51.472 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:51.472 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:51.472 "hdgst": false, 00:22:51.472 "ddgst": false 00:22:51.472 }, 00:22:51.472 "method": "bdev_nvme_attach_controller" 00:22:51.472 },{ 00:22:51.472 "params": { 00:22:51.472 "name": "Nvme8", 00:22:51.472 "trtype": "tcp", 00:22:51.472 "traddr": "10.0.0.2", 00:22:51.472 "adrfam": "ipv4", 00:22:51.472 "trsvcid": "4420", 00:22:51.472 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:51.472 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:51.472 "hdgst": false, 00:22:51.472 "ddgst": false 00:22:51.472 }, 00:22:51.472 "method": "bdev_nvme_attach_controller" 00:22:51.472 },{ 00:22:51.472 "params": { 00:22:51.472 "name": "Nvme9", 00:22:51.472 "trtype": "tcp", 00:22:51.472 "traddr": "10.0.0.2", 00:22:51.472 "adrfam": "ipv4", 00:22:51.472 "trsvcid": "4420", 00:22:51.472 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:51.472 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:51.472 "hdgst": false, 00:22:51.472 "ddgst": false 00:22:51.472 }, 00:22:51.472 "method": "bdev_nvme_attach_controller" 00:22:51.472 },{ 00:22:51.472 "params": { 00:22:51.472 "name": "Nvme10", 00:22:51.472 "trtype": "tcp", 00:22:51.472 "traddr": "10.0.0.2", 00:22:51.472 "adrfam": "ipv4", 00:22:51.472 "trsvcid": "4420", 00:22:51.472 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:51.472 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:51.472 "hdgst": false, 00:22:51.472 "ddgst": false 00:22:51.472 }, 00:22:51.472 "method": "bdev_nvme_attach_controller" 00:22:51.472 }' 00:22:51.734 [2024-11-20 07:21:13.750366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.734 [2024-11-20 07:21:13.804119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.119 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:53.119 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:22:53.119 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:53.119 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.119 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:53.119 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.119 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3583537 00:22:53.119 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:53.119 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:54.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3583537 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:54.060 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3583152 00:22:54.060 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:54.060 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:54.060 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:54.060 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:54.060 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.060 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.060 { 00:22:54.060 "params": { 00:22:54.060 "name": "Nvme$subsystem", 00:22:54.060 "trtype": "$TEST_TRANSPORT", 00:22:54.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.060 "adrfam": "ipv4", 00:22:54.060 "trsvcid": "$NVMF_PORT", 00:22:54.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.060 "hdgst": ${hdgst:-false}, 00:22:54.060 "ddgst": ${ddgst:-false} 00:22:54.060 }, 00:22:54.060 "method": "bdev_nvme_attach_controller" 00:22:54.060 } 00:22:54.060 EOF 00:22:54.060 )") 00:22:54.060 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.060 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.060 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.060 { 00:22:54.060 "params": { 00:22:54.060 "name": "Nvme$subsystem", 00:22:54.060 "trtype": "$TEST_TRANSPORT", 00:22:54.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.060 "adrfam": "ipv4", 00:22:54.060 "trsvcid": "$NVMF_PORT", 00:22:54.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.060 "hdgst": ${hdgst:-false}, 00:22:54.060 "ddgst": ${ddgst:-false} 00:22:54.060 }, 00:22:54.060 "method": "bdev_nvme_attach_controller" 00:22:54.060 } 00:22:54.060 EOF 00:22:54.060 )") 00:22:54.060 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.060 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.060 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.060 { 00:22:54.060 "params": { 00:22:54.060 "name": "Nvme$subsystem", 00:22:54.060 "trtype": "$TEST_TRANSPORT", 00:22:54.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.060 "adrfam": "ipv4", 00:22:54.060 "trsvcid": "$NVMF_PORT", 00:22:54.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.060 "hdgst": ${hdgst:-false}, 00:22:54.060 "ddgst": ${ddgst:-false} 00:22:54.060 }, 00:22:54.060 "method": "bdev_nvme_attach_controller" 00:22:54.060 } 00:22:54.060 EOF 00:22:54.060 )") 00:22:54.060 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.060 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.061 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.061 { 00:22:54.061 "params": { 00:22:54.061 "name": "Nvme$subsystem", 00:22:54.061 "trtype": "$TEST_TRANSPORT", 00:22:54.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.061 "adrfam": "ipv4", 00:22:54.061 "trsvcid": "$NVMF_PORT", 00:22:54.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.061 "hdgst": ${hdgst:-false}, 00:22:54.061 "ddgst": ${ddgst:-false} 00:22:54.061 }, 00:22:54.061 "method": "bdev_nvme_attach_controller" 00:22:54.061 } 00:22:54.061 EOF 00:22:54.061 )") 00:22:54.061 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.061 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.061 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.061 { 00:22:54.061 "params": { 00:22:54.061 "name": "Nvme$subsystem", 00:22:54.061 "trtype": "$TEST_TRANSPORT", 00:22:54.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.061 "adrfam": "ipv4", 00:22:54.061 "trsvcid": "$NVMF_PORT", 00:22:54.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.061 "hdgst": ${hdgst:-false}, 00:22:54.061 "ddgst": ${ddgst:-false} 00:22:54.061 }, 00:22:54.061 "method": "bdev_nvme_attach_controller" 00:22:54.061 } 00:22:54.061 EOF 00:22:54.061 )") 00:22:54.061 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.061 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.061 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.061 { 00:22:54.061 "params": { 00:22:54.061 "name": "Nvme$subsystem", 00:22:54.061 "trtype": "$TEST_TRANSPORT", 00:22:54.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.061 "adrfam": "ipv4", 00:22:54.061 "trsvcid": "$NVMF_PORT", 00:22:54.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.061 "hdgst": ${hdgst:-false}, 00:22:54.061 "ddgst": ${ddgst:-false} 00:22:54.061 }, 00:22:54.061 "method": "bdev_nvme_attach_controller" 00:22:54.061 } 00:22:54.061 EOF 00:22:54.061 )") 00:22:54.061 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.061 [2024-11-20 07:21:16.191961] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:22:54.061 [2024-11-20 07:21:16.192017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3583907 ] 00:22:54.061 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.061 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.061 { 00:22:54.061 "params": { 00:22:54.061 "name": "Nvme$subsystem", 00:22:54.061 "trtype": "$TEST_TRANSPORT", 00:22:54.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.061 "adrfam": "ipv4", 00:22:54.061 "trsvcid": "$NVMF_PORT", 00:22:54.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.061 "hdgst": ${hdgst:-false}, 00:22:54.061 "ddgst": ${ddgst:-false} 00:22:54.061 }, 00:22:54.061 "method": "bdev_nvme_attach_controller" 00:22:54.061 } 00:22:54.061 EOF 00:22:54.061 )") 00:22:54.061 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.061 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.061 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.061 { 00:22:54.061 "params": { 00:22:54.061 "name": "Nvme$subsystem", 00:22:54.061 "trtype": "$TEST_TRANSPORT", 00:22:54.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.061 "adrfam": "ipv4", 00:22:54.061 "trsvcid": "$NVMF_PORT", 00:22:54.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.061 "hdgst": ${hdgst:-false}, 00:22:54.061 "ddgst": ${ddgst:-false} 00:22:54.061 }, 00:22:54.061 "method": "bdev_nvme_attach_controller" 00:22:54.061 } 00:22:54.061 EOF 00:22:54.061 )") 00:22:54.061 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.061 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.061 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.061 { 00:22:54.061 "params": { 00:22:54.061 "name": "Nvme$subsystem", 00:22:54.061 "trtype": "$TEST_TRANSPORT", 00:22:54.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.061 "adrfam": "ipv4", 00:22:54.061 "trsvcid": "$NVMF_PORT", 00:22:54.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.061 "hdgst": ${hdgst:-false}, 00:22:54.061 "ddgst": ${ddgst:-false} 00:22:54.061 }, 00:22:54.061 "method": "bdev_nvme_attach_controller" 00:22:54.061 } 00:22:54.061 EOF 00:22:54.061 )") 00:22:54.061 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.061 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.061 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.061 { 00:22:54.061 "params": { 00:22:54.061 "name": "Nvme$subsystem", 00:22:54.061 "trtype": "$TEST_TRANSPORT", 00:22:54.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.061 
"adrfam": "ipv4", 00:22:54.061 "trsvcid": "$NVMF_PORT", 00:22:54.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.061 "hdgst": ${hdgst:-false}, 00:22:54.061 "ddgst": ${ddgst:-false} 00:22:54.061 }, 00:22:54.061 "method": "bdev_nvme_attach_controller" 00:22:54.061 } 00:22:54.061 EOF 00:22:54.061 )") 00:22:54.061 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.061 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:54.061 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:54.061 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:54.061 "params": { 00:22:54.061 "name": "Nvme1", 00:22:54.061 "trtype": "tcp", 00:22:54.061 "traddr": "10.0.0.2", 00:22:54.061 "adrfam": "ipv4", 00:22:54.061 "trsvcid": "4420", 00:22:54.061 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.061 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.061 "hdgst": false, 00:22:54.061 "ddgst": false 00:22:54.061 }, 00:22:54.061 "method": "bdev_nvme_attach_controller" 00:22:54.061 },{ 00:22:54.061 "params": { 00:22:54.061 "name": "Nvme2", 00:22:54.061 "trtype": "tcp", 00:22:54.061 "traddr": "10.0.0.2", 00:22:54.061 "adrfam": "ipv4", 00:22:54.061 "trsvcid": "4420", 00:22:54.061 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:54.061 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:54.061 "hdgst": false, 00:22:54.061 "ddgst": false 00:22:54.061 }, 00:22:54.061 "method": "bdev_nvme_attach_controller" 00:22:54.061 },{ 00:22:54.061 "params": { 00:22:54.061 "name": "Nvme3", 00:22:54.061 "trtype": "tcp", 00:22:54.061 "traddr": "10.0.0.2", 00:22:54.061 "adrfam": "ipv4", 00:22:54.061 "trsvcid": "4420", 00:22:54.061 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:54.061 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:54.061 "hdgst": false, 00:22:54.061 "ddgst": false 00:22:54.061 }, 00:22:54.061 "method": "bdev_nvme_attach_controller" 00:22:54.061 },{ 00:22:54.061 "params": { 00:22:54.061 "name": "Nvme4", 00:22:54.061 "trtype": "tcp", 00:22:54.061 "traddr": "10.0.0.2", 00:22:54.061 "adrfam": "ipv4", 00:22:54.061 "trsvcid": "4420", 00:22:54.061 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:54.061 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:54.061 "hdgst": false, 00:22:54.061 "ddgst": false 00:22:54.061 }, 00:22:54.061 "method": "bdev_nvme_attach_controller" 00:22:54.061 },{ 00:22:54.061 "params": { 00:22:54.061 "name": "Nvme5", 00:22:54.061 "trtype": "tcp", 00:22:54.061 "traddr": "10.0.0.2", 00:22:54.061 "adrfam": "ipv4", 00:22:54.061 "trsvcid": "4420", 00:22:54.061 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:54.061 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:54.061 "hdgst": false, 00:22:54.061 "ddgst": false 00:22:54.061 }, 00:22:54.061 "method": "bdev_nvme_attach_controller" 00:22:54.061 },{ 00:22:54.061 "params": { 00:22:54.061 "name": "Nvme6", 00:22:54.061 "trtype": "tcp", 00:22:54.061 "traddr": "10.0.0.2", 00:22:54.061 "adrfam": "ipv4", 00:22:54.061 "trsvcid": "4420", 00:22:54.062 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:54.062 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:54.062 "hdgst": false, 00:22:54.062 "ddgst": false 00:22:54.062 }, 00:22:54.062 "method": "bdev_nvme_attach_controller" 00:22:54.062 },{ 00:22:54.062 "params": { 00:22:54.062 "name": "Nvme7", 00:22:54.062 "trtype": "tcp", 00:22:54.062 "traddr": "10.0.0.2", 
00:22:54.062 "adrfam": "ipv4", 00:22:54.062 "trsvcid": "4420", 00:22:54.062 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:54.062 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:54.062 "hdgst": false, 00:22:54.062 "ddgst": false 00:22:54.062 }, 00:22:54.062 "method": "bdev_nvme_attach_controller" 00:22:54.062 },{ 00:22:54.062 "params": { 00:22:54.062 "name": "Nvme8", 00:22:54.062 "trtype": "tcp", 00:22:54.062 "traddr": "10.0.0.2", 00:22:54.062 "adrfam": "ipv4", 00:22:54.062 "trsvcid": "4420", 00:22:54.062 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:54.062 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:54.062 "hdgst": false, 00:22:54.062 "ddgst": false 00:22:54.062 }, 00:22:54.062 "method": "bdev_nvme_attach_controller" 00:22:54.062 },{ 00:22:54.062 "params": { 00:22:54.062 "name": "Nvme9", 00:22:54.062 "trtype": "tcp", 00:22:54.062 "traddr": "10.0.0.2", 00:22:54.062 "adrfam": "ipv4", 00:22:54.062 "trsvcid": "4420", 00:22:54.062 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:54.062 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:54.062 "hdgst": false, 00:22:54.062 "ddgst": false 00:22:54.062 }, 00:22:54.062 "method": "bdev_nvme_attach_controller" 00:22:54.062 },{ 00:22:54.062 "params": { 00:22:54.062 "name": "Nvme10", 00:22:54.062 "trtype": "tcp", 00:22:54.062 "traddr": "10.0.0.2", 00:22:54.062 "adrfam": "ipv4", 00:22:54.062 "trsvcid": "4420", 00:22:54.062 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:54.062 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:54.062 "hdgst": false, 00:22:54.062 "ddgst": false 00:22:54.062 }, 00:22:54.062 "method": "bdev_nvme_attach_controller" 00:22:54.062 }' 00:22:54.062 [2024-11-20 07:21:16.281568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.062 [2024-11-20 07:21:16.317389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.444 Running I/O for 1 seconds... 
00:22:56.725 1857.00 IOPS, 116.06 MiB/s
00:22:56.725 Latency(us)
00:22:56.725 [2024-11-20T06:21:19.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:56.725 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:56.725 Verification LBA range: start 0x0 length 0x400
00:22:56.725 Nvme1n1 : 1.16 221.00 13.81 0.00 0.00 286746.24 18896.21 251658.24
00:22:56.725 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:56.725 Verification LBA range: start 0x0 length 0x400
00:22:56.725 Nvme2n1 : 1.18 216.84 13.55 0.00 0.00 287439.79 21080.75 283115.52
00:22:56.725 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:56.725 Verification LBA range: start 0x0 length 0x400
00:22:56.725 Nvme3n1 : 1.07 238.27 14.89 0.00 0.00 256188.16 18459.31 255153.49
00:22:56.725 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:56.725 Verification LBA range: start 0x0 length 0x400
00:22:56.725 Nvme4n1 : 1.10 235.26 14.70 0.00 0.00 248597.14 21299.20 251658.24
00:22:56.725 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:56.725 Verification LBA range: start 0x0 length 0x400
00:22:56.725 Nvme5n1 : 1.20 267.19 16.70 0.00 0.00 221683.20 17585.49 223696.21
00:22:56.725 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:56.725 Verification LBA range: start 0x0 length 0x400
00:22:56.725 Nvme6n1 : 1.19 214.42 13.40 0.00 0.00 270379.52 34515.63 249910.61
00:22:56.725 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:56.725 Verification LBA range: start 0x0 length 0x400
00:22:56.725 Nvme7n1 : 1.20 266.36 16.65 0.00 0.00 214312.02 10376.53 249910.61
00:22:56.725 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:56.725 Verification LBA range: start 0x0 length 0x400
00:22:56.725 Nvme8n1 : 1.20 265.66 16.60 0.00 0.00 211555.67 15728.64 253405.87
00:22:56.725 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:56.725 Verification LBA range: start 0x0 length 0x400
00:22:56.725 Nvme9n1 : 1.19 214.61 13.41 0.00 0.00 256796.37 18786.99 274377.39
00:22:56.725 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:56.725 Verification LBA range: start 0x0 length 0x400
00:22:56.725 Nvme10n1 : 1.21 265.14 16.57 0.00 0.00 203439.45 11960.32 260396.37
00:22:56.725 [2024-11-20T06:21:19.003Z] ===================================================================================================================
00:22:56.725 [2024-11-20T06:21:19.003Z] Total : 2404.76 150.30 0.00 0.00 242721.00 10376.53 283115.52
00:22:56.725 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 07:21:18
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:56.725 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:56.725 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:56.725 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:56.725 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:56.725 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:56.725 rmmod nvme_tcp 00:22:56.725 rmmod nvme_fabrics 00:22:56.725 rmmod nvme_keyring 00:22:56.725 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:56.725 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:56.725 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:56.725 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3583152 ']' 00:22:56.725 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3583152 00:22:56.725 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 3583152 ']' 00:22:56.985 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 3583152 00:22:56.985 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:22:56.985 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:56.985 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3583152 00:22:56.985 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:56.985 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:56.985 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3583152' 00:22:56.985 killing process with pid 3583152 00:22:56.985 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 3583152 00:22:56.985 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 3583152 00:22:57.246 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:57.246 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:57.246 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:57.246 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:57.246 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:57.246 07:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:57.246 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:57.246 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:57.246 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:57.246 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.246 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.246 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.159 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:59.159 00:22:59.159 real 0m16.916s 00:22:59.159 user 0m34.091s 00:22:59.159 sys 0m6.960s 00:22:59.159 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:59.159 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:59.159 ************************************ 00:22:59.159 END TEST nvmf_shutdown_tc1 00:22:59.159 ************************************ 00:22:59.160 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:59.160 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:59.160 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:59.160 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:59.421 ************************************ 00:22:59.421 START TEST nvmf_shutdown_tc2 00:22:59.421 ************************************ 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.421 
07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:59.421 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.421 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:59.422 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:59.422 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:59.422 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:59.422 07:21:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:59.422 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:59.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:22:59.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:22:59.683 00:22:59.683 --- 10.0.0.2 ping statistics --- 00:22:59.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.683 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:59.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:22:59.683 00:22:59.683 --- 10.0.0.1 ping statistics --- 00:22:59.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.683 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3585135 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3585135 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3585135 ']' 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:59.683 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:59.683 [2024-11-20 07:21:21.888832] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:22:59.683 [2024-11-20 07:21:21.888883] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.944 [2024-11-20 07:21:21.975599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:59.944 [2024-11-20 07:21:22.006263] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.944 [2024-11-20 07:21:22.006290] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.944 [2024-11-20 07:21:22.006296] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.944 [2024-11-20 07:21:22.006301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.944 [2024-11-20 07:21:22.006305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:59.944 [2024-11-20 07:21:22.007759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.944 [2024-11-20 07:21:22.007915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:59.944 [2024-11-20 07:21:22.008061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.944 [2024-11-20 07:21:22.008063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.516 [2024-11-20 07:21:22.739915] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.516 
07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.516 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:00.776 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.776 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:00.777 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.777 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:00.777 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:00.777 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
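
A note on the loop traced above: shutdown.sh@28-29 does not issue RPCs one at a time. Each of the ten iterations cats a per-subsystem snippet onto rpcs.txt, and the bare rpc_cmd at shutdown.sh@36 then replays the whole file in a single batch, which is why the ten Malloc bdevs appear together just below. A minimal sketch of that batching pattern, assuming the standard SPDK RPC names and illustrative malloc sizes (the exact snippet shutdown.sh@29 cats is not visible in this trace):

# build one RPC script with four calls per subsystem, then submit it once
num_subsystems=({1..10})
rpcs=/tmp/rpcs.txt
rm -f "$rpcs"
for i in "${num_subsystems[@]}"; do
cat <<EOF >> "$rpcs"
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# the harness feeds these lines to a persistent "rpc.py --server" coprocess;
# standalone, piping the file into that same server mode is the equivalent:
scripts/rpc.py --server < "$rpcs"

Batching avoids paying rpc.py's interpreter start-up cost once per call; the harness's rpc_cmd wrapper exists for the same reason.
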
00:23:00.777 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.777 Malloc1 00:23:00.777 [2024-11-20 07:21:22.846867] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.777 Malloc2 00:23:00.777 Malloc3 00:23:00.777 Malloc4 00:23:00.777 Malloc5 00:23:00.777 Malloc6 00:23:01.038 Malloc7 00:23:01.038 Malloc8 00:23:01.038 Malloc9 00:23:01.038 Malloc10 00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3585412 00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3585412 /var/tmp/bdevperf.sock 00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3585412 ']' 00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
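
The bdevperf launch traced below wires its configuration in through a file descriptor rather than a file on disk: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem id, joins them with IFS=',' and pretty-prints the result with jq, and the /dev/fd/63 in the command line is bash process substitution handing that JSON to --json. A reduced stand-in for the generator, with the caveat that the outer {"subsystems":[{"subsystem":"bdev","config":[...]}]} wrapper is the standard SPDK app-config shape assumed here, since the trace only shows the joined entries:

# stand-in for nvmf/common.sh's gen_nvmf_target_json
gen_cfg() {
  local i entries=()
  for i in "$@"; do
    entries+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false}, "method": "bdev_nvme_attach_controller"}' "$i" "$i" "$i")")
  done
  local IFS=,   # makes ${entries[*]} comma-join below
  jq . <<< "{\"subsystems\": [{\"subsystem\": \"bdev\", \"config\": [${entries[*]}]}]}"
}
# same flags as the trace: queue depth 64, 64 KiB IOs, verify workload, 10 s
build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_cfg {1..10}) -q 64 -o 65536 -w verify -t 10
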
00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.038 { 00:23:01.038 "params": { 00:23:01.038 "name": "Nvme$subsystem", 00:23:01.038 "trtype": "$TEST_TRANSPORT", 00:23:01.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.038 "adrfam": "ipv4", 00:23:01.038 "trsvcid": "$NVMF_PORT", 00:23:01.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.038 "hdgst": ${hdgst:-false}, 00:23:01.038 "ddgst": ${ddgst:-false} 00:23:01.038 }, 00:23:01.038 "method": "bdev_nvme_attach_controller" 00:23:01.038 } 00:23:01.038 EOF 00:23:01.038 )") 00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.038 { 00:23:01.038 "params": { 00:23:01.038 "name": "Nvme$subsystem", 00:23:01.038 "trtype": "$TEST_TRANSPORT", 00:23:01.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.038 "adrfam": "ipv4", 00:23:01.038 "trsvcid": "$NVMF_PORT", 00:23:01.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.038 "hdgst": ${hdgst:-false}, 00:23:01.038 "ddgst": ${ddgst:-false} 00:23:01.038 }, 00:23:01.038 "method": "bdev_nvme_attach_controller" 00:23:01.038 } 00:23:01.038 EOF 00:23:01.038 )") 00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.038 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.038 { 00:23:01.038 "params": { 00:23:01.038 "name": "Nvme$subsystem", 00:23:01.038 "trtype": "$TEST_TRANSPORT", 00:23:01.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.038 "adrfam": "ipv4", 00:23:01.038 "trsvcid": "$NVMF_PORT", 00:23:01.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.038 "hdgst": ${hdgst:-false}, 00:23:01.039 "ddgst": ${ddgst:-false} 00:23:01.039 }, 00:23:01.039 "method": 
"bdev_nvme_attach_controller" 00:23:01.039 } 00:23:01.039 EOF 00:23:01.039 )") 00:23:01.039 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.039 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.039 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.039 { 00:23:01.039 "params": { 00:23:01.039 "name": "Nvme$subsystem", 00:23:01.039 "trtype": "$TEST_TRANSPORT", 00:23:01.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.039 "adrfam": "ipv4", 00:23:01.039 "trsvcid": "$NVMF_PORT", 00:23:01.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.039 "hdgst": ${hdgst:-false}, 00:23:01.039 "ddgst": ${ddgst:-false} 00:23:01.039 }, 00:23:01.039 "method": "bdev_nvme_attach_controller" 00:23:01.039 } 00:23:01.039 EOF 00:23:01.039 )") 00:23:01.039 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.039 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.039 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.039 { 00:23:01.039 "params": { 00:23:01.039 "name": "Nvme$subsystem", 00:23:01.039 "trtype": "$TEST_TRANSPORT", 00:23:01.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.039 "adrfam": "ipv4", 00:23:01.039 "trsvcid": "$NVMF_PORT", 00:23:01.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.039 "hdgst": ${hdgst:-false}, 00:23:01.039 "ddgst": ${ddgst:-false} 00:23:01.039 }, 00:23:01.039 "method": "bdev_nvme_attach_controller" 00:23:01.039 } 00:23:01.039 EOF 00:23:01.039 )") 00:23:01.039 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.039 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.039 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.039 { 00:23:01.039 "params": { 00:23:01.039 "name": "Nvme$subsystem", 00:23:01.039 "trtype": "$TEST_TRANSPORT", 00:23:01.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.039 "adrfam": "ipv4", 00:23:01.039 "trsvcid": "$NVMF_PORT", 00:23:01.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.039 "hdgst": ${hdgst:-false}, 00:23:01.039 "ddgst": ${ddgst:-false} 00:23:01.039 }, 00:23:01.039 "method": "bdev_nvme_attach_controller" 00:23:01.039 } 00:23:01.039 EOF 00:23:01.039 )") 00:23:01.039 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.039 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.039 [2024-11-20 07:21:23.288704] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:23:01.039 [2024-11-20 07:21:23.288758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3585412 ] 00:23:01.039 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.039 { 00:23:01.039 "params": { 00:23:01.039 "name": "Nvme$subsystem", 00:23:01.039 "trtype": "$TEST_TRANSPORT", 00:23:01.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.039 "adrfam": "ipv4", 00:23:01.039 "trsvcid": "$NVMF_PORT", 00:23:01.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.039 "hdgst": ${hdgst:-false}, 00:23:01.039 "ddgst": ${ddgst:-false} 00:23:01.039 }, 00:23:01.039 "method": "bdev_nvme_attach_controller" 00:23:01.039 } 00:23:01.039 EOF 00:23:01.039 )") 00:23:01.039 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.039 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.039 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.039 { 00:23:01.039 "params": { 00:23:01.039 "name": "Nvme$subsystem", 00:23:01.039 "trtype": "$TEST_TRANSPORT", 00:23:01.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.039 "adrfam": "ipv4", 00:23:01.039 "trsvcid": "$NVMF_PORT", 00:23:01.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.039 "hdgst": ${hdgst:-false}, 00:23:01.039 "ddgst": ${ddgst:-false} 00:23:01.039 }, 00:23:01.039 "method": "bdev_nvme_attach_controller" 00:23:01.039 } 00:23:01.039 EOF 00:23:01.039 )") 00:23:01.039 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.039 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.039 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.039 { 00:23:01.039 "params": { 00:23:01.039 "name": "Nvme$subsystem", 00:23:01.039 "trtype": "$TEST_TRANSPORT", 00:23:01.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.039 "adrfam": "ipv4", 00:23:01.039 "trsvcid": "$NVMF_PORT", 00:23:01.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.039 "hdgst": ${hdgst:-false}, 00:23:01.039 "ddgst": ${ddgst:-false} 00:23:01.039 }, 00:23:01.039 "method": "bdev_nvme_attach_controller" 00:23:01.039 } 00:23:01.039 EOF 00:23:01.039 )") 00:23:01.039 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.300 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.300 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.300 { 00:23:01.300 "params": { 00:23:01.300 "name": "Nvme$subsystem", 00:23:01.300 "trtype": "$TEST_TRANSPORT", 00:23:01.300 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.300 "adrfam": "ipv4", 00:23:01.300 "trsvcid": "$NVMF_PORT", 00:23:01.300 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.300 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.300 "hdgst": ${hdgst:-false}, 00:23:01.300 "ddgst": ${ddgst:-false} 00:23:01.300 }, 00:23:01.300 "method": "bdev_nvme_attach_controller" 00:23:01.300 } 00:23:01.300 EOF 00:23:01.300 )") 00:23:01.300 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.300 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:23:01.300 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:23:01.300 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:01.300 "params": { 00:23:01.300 "name": "Nvme1", 00:23:01.300 "trtype": "tcp", 00:23:01.300 "traddr": "10.0.0.2", 00:23:01.300 "adrfam": "ipv4", 00:23:01.300 "trsvcid": "4420", 00:23:01.300 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.300 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.300 "hdgst": false, 00:23:01.300 "ddgst": false 00:23:01.300 }, 00:23:01.300 "method": "bdev_nvme_attach_controller" 00:23:01.300 },{ 00:23:01.300 "params": { 00:23:01.300 "name": "Nvme2", 00:23:01.300 "trtype": "tcp", 00:23:01.300 "traddr": "10.0.0.2", 00:23:01.300 "adrfam": "ipv4", 00:23:01.300 "trsvcid": "4420", 00:23:01.300 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:01.300 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:01.300 "hdgst": false, 00:23:01.300 "ddgst": false 00:23:01.300 }, 00:23:01.300 "method": "bdev_nvme_attach_controller" 00:23:01.300 },{ 00:23:01.300 "params": { 00:23:01.300 "name": "Nvme3", 00:23:01.300 "trtype": "tcp", 00:23:01.300 "traddr": "10.0.0.2", 00:23:01.300 "adrfam": "ipv4", 00:23:01.300 "trsvcid": "4420", 00:23:01.300 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:01.300 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:01.300 "hdgst": false, 00:23:01.300 "ddgst": false 00:23:01.300 }, 00:23:01.300 "method": "bdev_nvme_attach_controller" 00:23:01.300 },{ 00:23:01.300 "params": { 00:23:01.300 "name": "Nvme4", 00:23:01.300 "trtype": "tcp", 00:23:01.300 "traddr": "10.0.0.2", 00:23:01.300 "adrfam": "ipv4", 00:23:01.300 "trsvcid": "4420", 00:23:01.300 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:01.300 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:01.300 "hdgst": false, 00:23:01.300 "ddgst": false 00:23:01.300 }, 00:23:01.300 "method": "bdev_nvme_attach_controller" 00:23:01.300 },{ 00:23:01.300 "params": { 00:23:01.300 "name": "Nvme5", 00:23:01.300 "trtype": "tcp", 00:23:01.300 "traddr": "10.0.0.2", 00:23:01.300 "adrfam": "ipv4", 00:23:01.300 "trsvcid": "4420", 00:23:01.300 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:01.300 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:01.300 "hdgst": false, 00:23:01.300 "ddgst": false 00:23:01.300 }, 00:23:01.300 "method": "bdev_nvme_attach_controller" 00:23:01.300 },{ 00:23:01.300 "params": { 00:23:01.301 "name": "Nvme6", 00:23:01.301 "trtype": "tcp", 00:23:01.301 "traddr": "10.0.0.2", 00:23:01.301 "adrfam": "ipv4", 00:23:01.301 "trsvcid": "4420", 00:23:01.301 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:01.301 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:01.301 "hdgst": false, 00:23:01.301 "ddgst": false 00:23:01.301 }, 00:23:01.301 "method": "bdev_nvme_attach_controller" 00:23:01.301 },{ 00:23:01.301 "params": { 00:23:01.301 "name": "Nvme7", 00:23:01.301 "trtype": "tcp", 00:23:01.301 "traddr": "10.0.0.2", 00:23:01.301 "adrfam": "ipv4", 00:23:01.301 "trsvcid": "4420", 00:23:01.301 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:01.301 
"hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:01.301 "hdgst": false, 00:23:01.301 "ddgst": false 00:23:01.301 }, 00:23:01.301 "method": "bdev_nvme_attach_controller" 00:23:01.301 },{ 00:23:01.301 "params": { 00:23:01.301 "name": "Nvme8", 00:23:01.301 "trtype": "tcp", 00:23:01.301 "traddr": "10.0.0.2", 00:23:01.301 "adrfam": "ipv4", 00:23:01.301 "trsvcid": "4420", 00:23:01.301 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:01.301 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:01.301 "hdgst": false, 00:23:01.301 "ddgst": false 00:23:01.301 }, 00:23:01.301 "method": "bdev_nvme_attach_controller" 00:23:01.301 },{ 00:23:01.301 "params": { 00:23:01.301 "name": "Nvme9", 00:23:01.301 "trtype": "tcp", 00:23:01.301 "traddr": "10.0.0.2", 00:23:01.301 "adrfam": "ipv4", 00:23:01.301 "trsvcid": "4420", 00:23:01.301 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:01.301 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:01.301 "hdgst": false, 00:23:01.301 "ddgst": false 00:23:01.301 }, 00:23:01.301 "method": "bdev_nvme_attach_controller" 00:23:01.301 },{ 00:23:01.301 "params": { 00:23:01.301 "name": "Nvme10", 00:23:01.301 "trtype": "tcp", 00:23:01.301 "traddr": "10.0.0.2", 00:23:01.301 "adrfam": "ipv4", 00:23:01.301 "trsvcid": "4420", 00:23:01.301 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:01.301 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:01.301 "hdgst": false, 00:23:01.301 "ddgst": false 00:23:01.301 }, 00:23:01.301 "method": "bdev_nvme_attach_controller" 00:23:01.301 }' 00:23:01.301 [2024-11-20 07:21:23.377871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.301 [2024-11-20 07:21:23.414392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.684 Running I/O for 10 seconds... 00:23:02.685 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:02.685 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:23:02.685 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:02.685 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.685 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:02.685 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.685 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:02.685 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:02.685 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:02.685 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:02.685 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:02.685 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:02.685 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:02.685 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:02.685 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:02.685 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.685 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:02.685 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.944 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:02.945 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:02.945 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:03.206 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:03.206 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:03.206 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:03.206 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:03.206 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.206 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:03.206 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.206 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:03.206 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:03.206 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:03.467 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:03.467 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:03.467 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:03.467 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:03.467 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.467 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:03.467 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.467 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=137 00:23:03.467 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 137 -ge 100 ']' 00:23:03.467 
07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:03.467 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:03.467 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:23:03.467 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3585412 00:23:03.467 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3585412 ']' 00:23:03.467 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3585412 00:23:03.467 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:23:03.467 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:03.467 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3585412 00:23:03.467 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:03.467 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:03.467 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3585412' 00:23:03.467 killing process with pid 3585412 00:23:03.467 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3585412 00:23:03.467 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3585412 00:23:03.467 Received shutdown signal, test time was about 0.967576 seconds 00:23:03.467 00:23:03.467 Latency(us) 00:23:03.467 [2024-11-20T06:21:25.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.467 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.467 Verification LBA range: start 0x0 length 0x400 00:23:03.467 Nvme1n1 : 0.96 267.78 16.74 0.00 0.00 236061.65 18240.85 244667.73 00:23:03.467 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.467 Verification LBA range: start 0x0 length 0x400 00:23:03.467 Nvme2n1 : 0.96 266.81 16.68 0.00 0.00 232264.53 20097.71 253405.87 00:23:03.467 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.467 Verification LBA range: start 0x0 length 0x400 00:23:03.467 Nvme3n1 : 0.95 269.17 16.82 0.00 0.00 225397.12 14964.05 246415.36 00:23:03.467 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.467 Verification LBA range: start 0x0 length 0x400 00:23:03.467 Nvme4n1 : 0.95 272.84 17.05 0.00 0.00 217010.49 4478.29 234181.97 00:23:03.467 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.467 Verification LBA range: start 0x0 length 0x400 00:23:03.467 Nvme5n1 : 0.93 206.74 12.92 0.00 0.00 280637.72 17913.17 244667.73 00:23:03.467 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.467 Verification LBA range: start 0x0 length 0x400 00:23:03.467 Nvme6n1 : 0.97 264.82 16.55 0.00 0.00 214633.60 19442.35 248162.99 00:23:03.467 Job: Nvme7n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:23:03.467 Verification LBA range: start 0x0 length 0x400 00:23:03.467 Nvme7n1 : 0.96 265.99 16.62 0.00 0.00 209340.37 18896.21 246415.36 00:23:03.467 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.467 Verification LBA range: start 0x0 length 0x400 00:23:03.467 Nvme8n1 : 0.93 211.27 13.20 0.00 0.00 254036.35 5789.01 241172.48 00:23:03.467 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.467 Verification LBA range: start 0x0 length 0x400 00:23:03.467 Nvme9n1 : 0.95 203.12 12.69 0.00 0.00 261057.71 18350.08 270882.13 00:23:03.467 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.467 Verification LBA range: start 0x0 length 0x400 00:23:03.467 Nvme10n1 : 0.94 203.93 12.75 0.00 0.00 253591.32 19442.35 248162.99 00:23:03.467 [2024-11-20T06:21:25.745Z] =================================================================================================================== 00:23:03.467 [2024-11-20T06:21:25.746Z] Total : 2432.47 152.03 0.00 0.00 235751.60 4478.29 270882.13 00:23:03.729 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:23:04.672 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3585135 00:23:04.672 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:23:04.672 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:04.672 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:04.672 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:04.672 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:04.672 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:04.672 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:23:04.672 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:04.672 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:23:04.672 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:04.672 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:04.672 rmmod nvme_tcp 00:23:04.672 rmmod nvme_fabrics 00:23:04.672 rmmod nvme_keyring 00:23:04.672 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:04.672 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:23:04.672 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:23:04.672 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3585135 ']' 00:23:04.672 07:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3585135 00:23:04.672 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3585135 ']' 00:23:04.672 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3585135 00:23:04.672 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:23:04.672 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:04.672 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3585135 00:23:04.933 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:04.933 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:04.933 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3585135' 00:23:04.933 killing process with pid 3585135 00:23:04.933 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3585135 00:23:04.933 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3585135 00:23:05.194 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:05.194 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:05.194 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:05.194 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:23:05.194 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:23:05.194 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:05.194 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:23:05.194 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:05.194 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:05.194 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.194 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.194 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.129 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:07.129 00:23:07.129 real 0m7.841s 00:23:07.129 user 0m23.645s 00:23:07.129 sys 0m1.262s 00:23:07.129 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:07.129 07:21:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:07.129 ************************************ 00:23:07.129 END TEST nvmf_shutdown_tc2 00:23:07.129 ************************************ 00:23:07.129 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:07.129 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:07.129 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:07.129 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:07.129 ************************************ 00:23:07.129 START TEST nvmf_shutdown_tc3 00:23:07.129 ************************************ 00:23:07.129 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:23:07.129 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:07.129 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:07.129 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:07.129 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.129 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:07.129 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:07.129 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:07.129 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.129 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.129 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.129 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:07.129 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:07.129 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:07.129 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:07.129 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:07.129 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:07.130 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:07.130 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:07.130 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:07.130 07:21:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:07.130 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:07.130 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:07.396 07:21:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:07.396 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.396 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:07.397 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:07.397 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:07.397 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:07.397 07:21:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.397 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:07.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:23:07.658 00:23:07.658 --- 10.0.0.2 ping statistics --- 00:23:07.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.658 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:07.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:23:07.658 00:23:07.658 --- 10.0.0.1 ping statistics --- 00:23:07.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.658 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3586867 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3586867 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3586867 ']' 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
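Annotation: the nvmf_tcp_init block above builds a two-endpoint TCP topology from the two E810 ports by moving one port into a network namespace, so target (10.0.0.2, inside cvl_0_0_ns_spdk) and initiator (10.0.0.1, in the root namespace) talk over real wire. Condensed from the commands traced above (interface names as in this run; the NIC ports are assumed to be cabled back-to-back):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                   # verify initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # verify target -> initiator

The NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") step then prefixes the target command with "ip netns exec cvl_0_0_ns_spdk", which is why nvmf_tgt is launched inside the namespace below.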
00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:07.658 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:07.658 [2024-11-20 07:21:29.824014] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:23:07.658 [2024-11-20 07:21:29.824079] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.658 [2024-11-20 07:21:29.919803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:07.918 [2024-11-20 07:21:29.953965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.919 [2024-11-20 07:21:29.953995] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.919 [2024-11-20 07:21:29.954001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.919 [2024-11-20 07:21:29.954006] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.919 [2024-11-20 07:21:29.954010] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:07.919 [2024-11-20 07:21:29.955494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.919 [2024-11-20 07:21:29.955647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:07.919 [2024-11-20 07:21:29.955797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.919 [2024-11-20 07:21:29.955799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:08.490 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:08.490 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:23:08.490 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:08.490 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:08.490 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.490 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.490 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:08.490 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.490 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.490 [2024-11-20 07:21:30.666911] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.490 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.490 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:08.490 07:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:08.490 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:08.490 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.490 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:08.490 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.490 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:08.490
[trace condensed: the shutdown.sh@28 "for i" / shutdown.sh@29 "cat" entry pair above repeats 10 times in total at 07:21:30, once per subsystem in {1..10}]
07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:08.490 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.490 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.490 Malloc1
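Annotation: the loop traced above appends one RPC template per subsystem to rpcs.txt, which the single rpc_cmd at shutdown.sh@36 then replays in batch. The template body is not echoed in this log; the sketch below is a plausible reconstruction based only on the artifacts that do appear later (Malloc1..Malloc10 bdevs, cnode1..cnode10 subsystems, a TCP listener on 10.0.0.2:4420). The malloc size and serial number are illustrative assumptions:

  num_subsystems=({1..10})
  for i in "${num_subsystems[@]}"; do
    cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
  done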
00:23:08.750 [2024-11-20 07:21:30.782761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.750 Malloc2 00:23:08.750 Malloc3 00:23:08.750 Malloc4 00:23:08.750 Malloc5 00:23:08.750 Malloc6 00:23:08.750 Malloc7 00:23:09.013 Malloc8 00:23:09.013 Malloc9 00:23:09.013 Malloc10 00:23:09.013 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.013 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:09.013 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:09.013 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.013 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3587254 00:23:09.013 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3587254 /var/tmp/bdevperf.sock 00:23:09.013 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3587254 ']' 00:23:09.013 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.013 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:09.013 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
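Annotation: the bdevperf launch that follows receives its controller config through a file descriptor: shutdown.sh@125 passes --json /dev/fd/63, which is what a bash process substitution of gen_nvmf_target_json looks like once expanded. Schematically (flags and paths as in this run; the process-substitution form is inferred from the /dev/fd/63 argument):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
      -q 64 -o 65536 -w verify -t 10

gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per subsystem (Nvme1..Nvme10), all pointing at 10.0.0.2:4420, as the expanded JSON printed below shows.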
00:23:09.013 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:09.013 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:09.013 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:09.014 { 00:23:09.014 "params": { 00:23:09.014 "name": "Nvme$subsystem", 00:23:09.014 "trtype": "$TEST_TRANSPORT", 00:23:09.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.014 "adrfam": "ipv4", 00:23:09.014 "trsvcid": "$NVMF_PORT", 00:23:09.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.014 "hdgst": ${hdgst:-false}, 00:23:09.014 "ddgst": ${ddgst:-false} 00:23:09.014 }, 00:23:09.014 "method": "bdev_nvme_attach_controller" 00:23:09.014 } 00:23:09.014 EOF 00:23:09.014 )") 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:09.014 { 00:23:09.014 "params": { 00:23:09.014 "name": "Nvme$subsystem", 00:23:09.014 "trtype": "$TEST_TRANSPORT", 00:23:09.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.014 "adrfam": "ipv4", 00:23:09.014 "trsvcid": "$NVMF_PORT", 00:23:09.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.014 "hdgst": ${hdgst:-false}, 00:23:09.014 "ddgst": ${ddgst:-false} 00:23:09.014 }, 00:23:09.014 "method": "bdev_nvme_attach_controller" 00:23:09.014 } 00:23:09.014 EOF 00:23:09.014 )") 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:09.014 { 00:23:09.014 "params": { 00:23:09.014 "name": "Nvme$subsystem", 00:23:09.014 "trtype": "$TEST_TRANSPORT", 00:23:09.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.014 "adrfam": "ipv4", 00:23:09.014 "trsvcid": "$NVMF_PORT", 00:23:09.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.014 "hdgst": ${hdgst:-false}, 00:23:09.014 "ddgst": ${ddgst:-false} 00:23:09.014 }, 00:23:09.014 "method": 
"bdev_nvme_attach_controller" 00:23:09.014 } 00:23:09.014 EOF 00:23:09.014 )") 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:09.014 { 00:23:09.014 "params": { 00:23:09.014 "name": "Nvme$subsystem", 00:23:09.014 "trtype": "$TEST_TRANSPORT", 00:23:09.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.014 "adrfam": "ipv4", 00:23:09.014 "trsvcid": "$NVMF_PORT", 00:23:09.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.014 "hdgst": ${hdgst:-false}, 00:23:09.014 "ddgst": ${ddgst:-false} 00:23:09.014 }, 00:23:09.014 "method": "bdev_nvme_attach_controller" 00:23:09.014 } 00:23:09.014 EOF 00:23:09.014 )") 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:09.014 { 00:23:09.014 "params": { 00:23:09.014 "name": "Nvme$subsystem", 00:23:09.014 "trtype": "$TEST_TRANSPORT", 00:23:09.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.014 "adrfam": "ipv4", 00:23:09.014 "trsvcid": "$NVMF_PORT", 00:23:09.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.014 "hdgst": ${hdgst:-false}, 00:23:09.014 "ddgst": ${ddgst:-false} 00:23:09.014 }, 00:23:09.014 "method": "bdev_nvme_attach_controller" 00:23:09.014 } 00:23:09.014 EOF 00:23:09.014 )") 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:09.014 { 00:23:09.014 "params": { 00:23:09.014 "name": "Nvme$subsystem", 00:23:09.014 "trtype": "$TEST_TRANSPORT", 00:23:09.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.014 "adrfam": "ipv4", 00:23:09.014 "trsvcid": "$NVMF_PORT", 00:23:09.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.014 "hdgst": ${hdgst:-false}, 00:23:09.014 "ddgst": ${ddgst:-false} 00:23:09.014 }, 00:23:09.014 "method": "bdev_nvme_attach_controller" 00:23:09.014 } 00:23:09.014 EOF 00:23:09.014 )") 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:09.014 [2024-11-20 07:21:31.231672] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:23:09.014 [2024-11-20 07:21:31.231726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3587254 ] 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:09.014 { 00:23:09.014 "params": { 00:23:09.014 "name": "Nvme$subsystem", 00:23:09.014 "trtype": "$TEST_TRANSPORT", 00:23:09.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.014 "adrfam": "ipv4", 00:23:09.014 "trsvcid": "$NVMF_PORT", 00:23:09.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.014 "hdgst": ${hdgst:-false}, 00:23:09.014 "ddgst": ${ddgst:-false} 00:23:09.014 }, 00:23:09.014 "method": "bdev_nvme_attach_controller" 00:23:09.014 } 00:23:09.014 EOF 00:23:09.014 )") 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:09.014 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:09.014 { 00:23:09.014 "params": { 00:23:09.014 "name": "Nvme$subsystem", 00:23:09.014 "trtype": "$TEST_TRANSPORT", 00:23:09.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.014 "adrfam": "ipv4", 00:23:09.014 "trsvcid": "$NVMF_PORT", 00:23:09.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.015 "hdgst": ${hdgst:-false}, 00:23:09.015 "ddgst": ${ddgst:-false} 00:23:09.015 }, 00:23:09.015 "method": "bdev_nvme_attach_controller" 00:23:09.015 } 00:23:09.015 EOF 00:23:09.015 )") 00:23:09.015 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:09.015 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:09.015 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:09.015 { 00:23:09.015 "params": { 00:23:09.015 "name": "Nvme$subsystem", 00:23:09.015 "trtype": "$TEST_TRANSPORT", 00:23:09.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.015 "adrfam": "ipv4", 00:23:09.015 "trsvcid": "$NVMF_PORT", 00:23:09.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.015 "hdgst": ${hdgst:-false}, 00:23:09.015 "ddgst": ${ddgst:-false} 00:23:09.015 }, 00:23:09.015 "method": "bdev_nvme_attach_controller" 00:23:09.015 } 00:23:09.015 EOF 00:23:09.015 )") 00:23:09.015 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:09.015 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:09.015 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:09.015 { 00:23:09.015 "params": { 00:23:09.015 "name": "Nvme$subsystem", 00:23:09.015 "trtype": "$TEST_TRANSPORT", 00:23:09.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.015 "adrfam": "ipv4", 00:23:09.015 "trsvcid": "$NVMF_PORT", 00:23:09.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.015 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.015 "hdgst": ${hdgst:-false}, 00:23:09.015 "ddgst": ${ddgst:-false} 00:23:09.015 }, 00:23:09.015 "method": "bdev_nvme_attach_controller" 00:23:09.015 } 00:23:09.015 EOF 00:23:09.015 )") 00:23:09.015 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:09.015 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:23:09.015 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:09.015 07:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:09.015 "params": { 00:23:09.015 "name": "Nvme1", 00:23:09.015 "trtype": "tcp", 00:23:09.015 "traddr": "10.0.0.2", 00:23:09.015 "adrfam": "ipv4", 00:23:09.015 "trsvcid": "4420", 00:23:09.015 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.015 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.015 "hdgst": false, 00:23:09.015 "ddgst": false 00:23:09.015 }, 00:23:09.015 "method": "bdev_nvme_attach_controller" 00:23:09.015 },{ 00:23:09.015 "params": { 00:23:09.015 "name": "Nvme2", 00:23:09.015 "trtype": "tcp", 00:23:09.015 "traddr": "10.0.0.2", 00:23:09.015 "adrfam": "ipv4", 00:23:09.015 "trsvcid": "4420", 00:23:09.015 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:09.015 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:09.015 "hdgst": false, 00:23:09.015 "ddgst": false 00:23:09.015 }, 00:23:09.015 "method": "bdev_nvme_attach_controller" 00:23:09.015 },{ 00:23:09.015 "params": { 00:23:09.015 "name": "Nvme3", 00:23:09.015 "trtype": "tcp", 00:23:09.015 "traddr": "10.0.0.2", 00:23:09.015 "adrfam": "ipv4", 00:23:09.015 "trsvcid": "4420", 00:23:09.015 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:09.015 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:09.015 "hdgst": false, 00:23:09.015 "ddgst": false 00:23:09.015 }, 00:23:09.015 "method": "bdev_nvme_attach_controller" 00:23:09.015 },{ 00:23:09.015 "params": { 00:23:09.015 "name": "Nvme4", 00:23:09.015 "trtype": "tcp", 00:23:09.015 "traddr": "10.0.0.2", 00:23:09.015 "adrfam": "ipv4", 00:23:09.015 "trsvcid": "4420", 00:23:09.015 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:09.015 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:09.015 "hdgst": false, 00:23:09.015 "ddgst": false 00:23:09.015 }, 00:23:09.015 "method": "bdev_nvme_attach_controller" 00:23:09.015 },{ 00:23:09.015 "params": { 00:23:09.015 "name": "Nvme5", 00:23:09.015 "trtype": "tcp", 00:23:09.015 "traddr": "10.0.0.2", 00:23:09.015 "adrfam": "ipv4", 00:23:09.015 "trsvcid": "4420", 00:23:09.015 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:09.015 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:09.015 "hdgst": false, 00:23:09.015 "ddgst": false 00:23:09.015 }, 00:23:09.015 "method": "bdev_nvme_attach_controller" 00:23:09.015 },{ 00:23:09.015 "params": { 00:23:09.015 "name": "Nvme6", 00:23:09.015 "trtype": "tcp", 00:23:09.015 "traddr": "10.0.0.2", 00:23:09.015 "adrfam": "ipv4", 00:23:09.015 "trsvcid": "4420", 00:23:09.015 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:09.015 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:09.015 "hdgst": false, 00:23:09.015 "ddgst": false 00:23:09.015 }, 00:23:09.015 "method": "bdev_nvme_attach_controller" 00:23:09.015 },{ 00:23:09.015 "params": { 00:23:09.015 "name": "Nvme7", 00:23:09.015 "trtype": "tcp", 00:23:09.015 "traddr": "10.0.0.2", 00:23:09.015 "adrfam": "ipv4", 00:23:09.015 "trsvcid": "4420", 00:23:09.015 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:09.015 
"hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:09.015 "hdgst": false, 00:23:09.015 "ddgst": false 00:23:09.015 }, 00:23:09.015 "method": "bdev_nvme_attach_controller" 00:23:09.015 },{ 00:23:09.015 "params": { 00:23:09.015 "name": "Nvme8", 00:23:09.015 "trtype": "tcp", 00:23:09.015 "traddr": "10.0.0.2", 00:23:09.015 "adrfam": "ipv4", 00:23:09.015 "trsvcid": "4420", 00:23:09.015 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:09.015 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:09.015 "hdgst": false, 00:23:09.015 "ddgst": false 00:23:09.015 }, 00:23:09.015 "method": "bdev_nvme_attach_controller" 00:23:09.015 },{ 00:23:09.015 "params": { 00:23:09.015 "name": "Nvme9", 00:23:09.015 "trtype": "tcp", 00:23:09.015 "traddr": "10.0.0.2", 00:23:09.015 "adrfam": "ipv4", 00:23:09.015 "trsvcid": "4420", 00:23:09.015 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:09.015 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:09.015 "hdgst": false, 00:23:09.015 "ddgst": false 00:23:09.015 }, 00:23:09.015 "method": "bdev_nvme_attach_controller" 00:23:09.015 },{ 00:23:09.015 "params": { 00:23:09.015 "name": "Nvme10", 00:23:09.015 "trtype": "tcp", 00:23:09.015 "traddr": "10.0.0.2", 00:23:09.015 "adrfam": "ipv4", 00:23:09.015 "trsvcid": "4420", 00:23:09.015 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:09.015 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:09.015 "hdgst": false, 00:23:09.015 "ddgst": false 00:23:09.015 }, 00:23:09.015 "method": "bdev_nvme_attach_controller" 00:23:09.015 }' 00:23:09.276 [2024-11-20 07:21:31.322559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.276 [2024-11-20 07:21:31.358888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.664 Running I/O for 10 seconds... 00:23:10.664 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:10.664 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:23:10.664 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:10.664 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.664 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:10.924 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.924 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:10.924 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:10.924 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:10.924 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:10.924 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:10.924 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:10.924 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # 
(( i = 10 )) 00:23:10.924 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:10.924 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:10.924 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:10.924 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.924 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:10.924 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.924 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:10.924 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:10.924 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:11.185 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:11.185 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:11.185 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:11.185 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:11.185 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.185 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:11.185 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.185 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:11.185 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:11.185 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:11.457 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:11.457 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:11.457 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:11.457 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:11.457 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.457 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:11.457 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.457 07:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:11.457 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:11.457 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:11.457 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:11.457 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:11.457 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3586867 00:23:11.457 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3586867 ']' 00:23:11.457 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3586867 00:23:11.457 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:23:11.457 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:11.457 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3586867 00:23:11.457 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:11.457 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:11.457 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3586867' 00:23:11.457 killing process with pid 3586867 00:23:11.457 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 3586867 00:23:11.457 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 3586867 00:23:11.457
[2024-11-20 07:21:33.673029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c393b0 is same with the state(6) to be set
[trace condensed: the nvmf_tcp_qpair_set_recv_state *ERROR* entry above repeats with successive timestamps while the target tears down its qpairs: for tqpair=0x1c393b0 from 07:21:33.673029 through 07:21:33.673386, then for tqpair=0x1c3bf80 from 07:21:33.674594 through 07:21:33.674920, then for tqpair=0x1c398a0 from 07:21:33.676035 onward (this last run is truncated here)]
*ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 
07:21:33.676333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.676371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c398a0 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.678428] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.459 [2024-11-20 07:21:33.679182] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.459 [2024-11-20 07:21:33.679894] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.459 [2024-11-20 07:21:33.682398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39d70 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.682423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39d70 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.682430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39d70 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.682435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39d70 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.682440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39d70 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.682447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39d70 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.682438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.459 [2024-11-20 07:21:33.682452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39d70 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.682457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39d70 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.682462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39d70 is same with the state(6) to be set 00:23:11.459 [2024-11-20 07:21:33.682463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.459 [2024-11-20 07:21:33.682467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
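The tcp.c:1773 *ERROR* storm above comes from a guard in SPDK's NVMe-oF TCP target transport: asking a qpair to enter the PDU receive state it is already in is logged and ignored, and a qpair parked in its terminal state during disconnect can hit that guard on every poll. Below is a minimal compilable sketch of the guard's shape, not the real lib/nvmf/tcp.c code: the stand-in enum values, the mapping of state(6) to the terminal state, and the use of fprintf in place of SPDK_ERRLOG are all assumptions taken from the log.

    #include <stdio.h>

    /* Stand-in for SPDK's per-qpair PDU receive state machine. Mapping
     * state(6) to the terminal/error end of the enum is an assumption
     * from the log, not from the SPDK headers. */
    enum nvme_tcp_pdu_recv_state {
        NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_READY = 0,
        /* ... AWAIT_PDU_CH, AWAIT_PDU_PSH, AWAIT_PDU_PAYLOAD, ... */
        NVME_TCP_PDU_RECV_STATE_ERROR = 6,
    };

    struct spdk_nvmf_tcp_qpair {
        enum nvme_tcp_pdu_recv_state recv_state;
    };

    /* The guard: re-setting the state the qpair is already in is logged
     * (SPDK_ERRLOG in the real code) and ignored. A qpair that stays in
     * its terminal state while being torn down trips this on every poller
     * pass, which is what produces the repeated line above. */
    static void
    nvmf_tcp_qpair_set_recv_state(struct spdk_nvmf_tcp_qpair *tqpair,
                                  enum nvme_tcp_pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;
    }

    int main(void)
    {
        struct spdk_nvmf_tcp_qpair q = { NVME_TCP_PDU_RECV_STATE_ERROR };
        nvmf_tcp_qpair_set_recv_state(&q, NVME_TCP_PDU_RECV_STATE_ERROR); /* logs once */
        return 0;
    }

The adjacent nvme_tcp.c:1184 "Unexpected PDU type 0x00" lines are the host-side mirror of the same teardown: the initiator's PDU common-header handler rejects an inbound PDU whose type field (0x00 here, typically an all-zero read on a dying connection) is not one it expects.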
00:23:11.459 [2024-11-20 07:21:33.682398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39d70 is same with the state(6) to be set
00:23:11.459 [...the previous *ERROR* line repeats ~65 more times for tqpair=0x1c39d70 between 07:21:33.682423 and 07:21:33.682779; in the raw log those repeats are interleaved mid-word with the host-side abort notices below, which are reassembled here...]
00:23:11.459 [2024-11-20 07:21:33.682438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:11.459 [2024-11-20 07:21:33.682463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.459 [2024-11-20 07:21:33.682473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:11.459 [2024-11-20 07:21:33.682483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.459 [2024-11-20 07:21:33.682492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:11.459 [2024-11-20 07:21:33.682500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.459 [2024-11-20 07:21:33.682519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:11.460 [2024-11-20 07:21:33.682527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.460 [2024-11-20 07:21:33.682535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaccb0 is same with the state(6) to be set
00:23:11.460 [...three more identical abort batches follow (four ASYNC EVENT REQUESTs each, cid:0-3, every one completed ABORTED - SQ DELETION (00/08)): tqpair=0x140cb00 at 07:21:33.682584-682664, tqpair=0x1426c00 at 07:21:33.682691-682765, and tqpair=0xfac850 at 07:21:33.682834-682899...]
00:23:11.460 [2024-11-20 07:21:33.684177] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:11.460 [2024-11-20 07:21:33.685897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(6) to be set
00:23:11.461 [...the previous *ERROR* line repeats ~55 more times for tqpair=0x1c3a260 between 07:21:33.685924 and 07:21:33.686240...]
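The "(00/08)" in the ABORTED notices above is the NVMe completion status printed as status-code-type/status-code: SCT 0x0 (generic command status) with SC 0x08 (Command Aborted due to SQ Deletion), i.e. the host's outstanding ASYNC EVENT REQUESTs were cut off when the submission queue was deleted during disconnect. A small self-contained decoder for that field (bit layout per the NVMe completion status word: bit 0 phase tag, bits 8:1 SC, bits 11:9 SCT):

    #include <stdio.h>

    /* Decode the "(SCT/SC)" pair that spdk_nvme_print_completion() prints
     * from a raw NVMe completion status word. */
    static void decode_status(unsigned short raw, unsigned *sct, unsigned *sc)
    {
        *sc  = (raw >> 1) & 0xff;  /* bits 8:1  - status code */
        *sct = (raw >> 9) & 0x7;   /* bits 11:9 - status code type */
    }

    int main(void)
    {
        unsigned sct, sc;
        /* ABORTED - SQ DELETION: SCT 0x0 (generic command status),
         * SC 0x08 (Command Aborted due to SQ Deletion) -> "(00/08)". */
        decode_status(0x08 << 1, &sct, &sc);
        printf("(%02x/%02x)%s\n", sct, sc,
               (sct == 0 && sc == 0x08) ? " ABORTED - SQ DELETION" : "");
        return 0;
    }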
00:23:11.461 [2024-11-20 07:21:33.687335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.461 [2024-11-20 07:21:33.687356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.462 [...the READ/ABORTED pair repeats for cid:1 through cid:63, lba stepping by 128 from 24704 to 32640, len:128 each, between 07:21:33.687374 and 07:21:33.688465...]
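The aborted READs summarized above form one regular sweep: 64 outstanding commands (cid 0-63) of 128 blocks each over a contiguous range on qid 1, so lba = 24576 + 128 * cid. The hypothetical check below just re-derives the endpoints visible in the log (cid:0 at lba 24576, cid:63 at lba 32640):

    #include <stdio.h>

    /* Re-derive the lba pattern of the aborted READ sweep: 64 commands,
     * 128 blocks each, contiguous from lba 24576. */
    int main(void)
    {
        const unsigned base_lba = 24576;  /* cid:0 in the log */
        const unsigned len = 128;         /* blocks per command */

        for (unsigned cid = 0; cid <= 63; cid++) {
            unsigned lba = base_lba + len * cid;
            if (cid == 0 || cid == 63) {  /* endpoints match the log */
                printf("cid:%u lba:%u len:%u\n", cid, lba, len);
            }
        }
        return 0;
    }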
07:21:33.688396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.463
[2024-11-20 07:21:33.688405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.463
[2024-11-20 07:21:33.688413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.463
[2024-11-20 07:21:33.688423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.463
[2024-11-20 07:21:33.688430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.463
[2024-11-20 07:21:33.688440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.463
[2024-11-20 07:21:33.688447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.463
[2024-11-20 07:21:33.688457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.463
[2024-11-20 07:21:33.688465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.463
[2024-11-20 07:21:33.688473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b2bd0 is same with the state(6) to be set 00:23:11.463
[2024-11-20 07:21:33.689111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a730 is same with the state(6) to be set 00:23:11.463
[2024-11-20 07:21:33.690078]
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:11.464
[2024-11-20 07:21:33.690115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3ac00 is same with the state(6) to be set 00:23:11.464
[2024-11-20 07:21:33.690136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa9d50 (9): Bad file descriptor 00:23:11.464
[2024-11-20 07:21:33.691170]
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.464
[2024-11-20 07:21:33.691196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa9d50 with addr=10.0.0.2, port=4420 00:23:11.464
[2024-11-20 07:21:33.691206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9d50 is same with the state(6) to be set 00:23:11.464
[2024-11-20 07:21:33.691359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b0d0 is same with the state(6) to be set 00:23:11.464
[2024-11-20 07:21:33.691448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa9d50 (9): Bad file descriptor 00:23:11.465
[2024-11-20 07:21:33.691514] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.465
[2024-11-20 07:21:33.691655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:11.465
[2024-11-20 07:21:33.691666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:11.465
[2024-11-20 07:21:33.691675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:11.465
[2024-11-20 07:21:33.691686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:23:11.465
[2024-11-20 07:21:33.691885] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.465
[2024-11-20 07:21:33.692351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b5c0 is same with the state(6) to be set 00:23:11.465
[2024-11-20 07:21:33.692454] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.466
[2024-11-20 07:21:33.692515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfaccb0 (9): Bad file descriptor 00:23:11.466
[2024-11-20 07:21:33.692552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.466
[2024-11-20 07:21:33.692567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.466
[2024-11-20 07:21:33.692576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.466
[2024-11-20 07:21:33.692585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.466
[2024-11-20 07:21:33.692595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.466
[2024-11-20 07:21:33.692603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.466
[2024-11-20 07:21:33.692614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.466
[2024-11-20 07:21:33.692622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.466
[2024-11-20 07:21:33.692629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efe40 is same with the state(6) to be set 00:23:11.466
[2024-11-20 07:21:33.692656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140cb00 (9): Bad file descriptor 00:23:11.466
[2024-11-20 07:21:33.692680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1426c00 (9): Bad file descriptor 00:23:11.466
[2024-11-20 07:21:33.692707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.466
[2024-11-20 07:21:33.692716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.466
[2024-11-20 07:21:33.692724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.466
[2024-11-20 07:21:33.692732] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.466 [2024-11-20 07:21:33.692740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.466 [2024-11-20 07:21:33.692747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.466 [2024-11-20 07:21:33.692755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.466 [2024-11-20 07:21:33.692762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.466 [2024-11-20 07:21:33.692769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc9f0 is same with the state(6) to be set 00:23:11.466 [2024-11-20 07:21:33.692793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.467 [2024-11-20 07:21:33.692801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.467 [2024-11-20 07:21:33.692810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.467 [2024-11-20 07:21:33.692817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.467 [2024-11-20 07:21:33.692825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.467 [2024-11-20 07:21:33.692832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.467 [2024-11-20 07:21:33.692840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.467 [2024-11-20 07:21:33.692848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.467 [2024-11-20 07:21:33.692855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec4610 is same with the state(6) to be set 00:23:11.467 [2024-11-20 07:21:33.692883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.467 [2024-11-20 07:21:33.692892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.467 [2024-11-20 07:21:33.692900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.467 [2024-11-20 07:21:33.692907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.467 [2024-11-20 07:21:33.692915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.467 [2024-11-20 07:21:33.692922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:11.467
[2024-11-20 07:21:33.692930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.467
[2024-11-20 07:21:33.692937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.467
[2024-11-20 07:21:33.692944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaafc0 is same with the state(6) to be set 00:23:11.467
[2024-11-20 07:21:33.692961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfac850 (9): Bad file descriptor 00:23:11.467
[2024-11-20 07:21:33.693155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3ba90 is same with the state(6) to be set 00:23:11.467
[2024-11-20 07:21:33.693347] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.467
[2024-11-20 07:21:33.700486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:11.467
[2024-11-20 07:21:33.700750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.467
[2024-11-20 07:21:33.700764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa9d50 with addr=10.0.0.2, port=4420 00:23:11.467
[2024-11-20 07:21:33.700772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9d50 is same with the state(6) to be set 00:23:11.467
[2024-11-20 07:21:33.700820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa9d50 (9): Bad file descriptor 00:23:11.467
[2024-11-20 07:21:33.700864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:23:11.467 [2024-11-20 07:21:33.700872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:23:11.467 [2024-11-20 07:21:33.700880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:23:11.467 [2024-11-20 07:21:33.700887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:23:11.467 [2024-11-20 07:21:33.702543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efe40 (9): Bad file descriptor
00:23:11.467 [2024-11-20 07:21:33.702584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13dc9f0 (9): Bad file descriptor
00:23:11.467 [2024-11-20 07:21:33.702601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec4610 (9): Bad file descriptor
00:23:11.467 [2024-11-20 07:21:33.702619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfaafc0 (9): Bad file descriptor
00:23:11.467 [2024-11-20 07:21:33.702725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.467 [2024-11-20 07:21:33.702735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further READ command/completion pairs (qid:1, cid:1-63, lba:16512-24448 in 128-block steps) omitted, each completed ABORTED - SQ DELETION (00/08) ...]
00:23:11.469 [2024-11-20 07:21:33.703810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149a0b0 is same with the state(6) to be set
00:23:11.469 [2024-11-20 07:21:33.705077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.469 [2024-11-20 07:21:33.705090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
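For readers decoding these completions: the pair printed in parentheses, (00/08), is the NVMe status code type (SCT) and status code (SC). SCT 0x0 is the Generic Command Status set, and SC 0x08 is Command Aborted due to SQ Deletion, which is why every I/O still outstanding on a deleted submission queue completes this way during the controller reset. A minimal decoding sketch in Python follows; it is illustrative only (the strings mirror what the log prints, but the helper is not an SPDK API):

# Hedged sketch: decode the "(SCT/SC)" pair that SPDK's completion
# logging prints, e.g. "ABORTED - SQ DELETION (00/08)". Values follow
# the NVMe base specification's Generic Command Status set.
GENERIC_STATUS = {
    0x00: "SUCCESS",
    0x07: "ABORTED - BY REQUEST",   # Command Abort Requested
    0x08: "ABORTED - SQ DELETION",  # Command Aborted due to SQ Deletion
}

def decode_status(sct: int, sc: int) -> str:
    """Return a readable name for a status code type / status code pair."""
    if sct == 0x0:  # Generic Command Status
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"sct=0x{sct:x} sc=0x{sc:02x}"  # other SCTs not decoded here

assert decode_status(0x0, 0x08) == "ABORTED - SQ DELETION"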
[... the same abort sequence repeats on a second qpair: READ command/completion pairs for cid:1-44 (lba:16512-22016) omitted, each completed ABORTED - SQ DELETION (00/08) ...]
00:23:11.470 [2024-11-20 07:21:33.706642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3ba90 is same with the state(6) to be set
[... the tcp.c:1773 record above repeats verbatim for tqpair=0x1c3ba90, timestamps 07:21:33.706642 through 07:21:33.706895 ...]
00:23:11.471 [2024-11-20 07:21:33.713467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.471 [2024-11-20 07:21:33.713498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
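The tcp.c:1773 record collapsed above differs only in its timestamp from line to line, emitted as the target keeps re-entering the same receive state while the connection is torn down. When triaging a raw autotest log, collapsing such runs makes the output readable again; a small sketch follows (the timestamp-stripping regex is an assumption based on the bracketed "[YYYY-MM-DD HH:MM:SS.usec]" format used here):

# Hedged helper: collapse consecutive identical messages (like the
# repeated recv-state line above) into "N x <message>" on stdout.
import re
import sys
from itertools import groupby

# Strip the elapsed stamp and the bracketed wall-clock stamp so that
# otherwise-identical records compare equal.
STAMP = re.compile(r"^\S+ \[\d{4}-\d{2}-\d{2} [\d:.]+\] ")

def message(line: str) -> str:
    return STAMP.sub("", line.rstrip("\n"))

for msg, run in groupby(sys.stdin, key=message):
    count = sum(1 for _ in run)
    print(f"{count} x {msg}" if count > 1 else msg)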
[... READ command/completion pairs for cid:46-63 (lba:22272-24448) omitted, each completed ABORTED - SQ DELETION (00/08) ...]
00:23:11.471 [2024-11-20 07:21:33.713821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149b330 is same with the state(6) to be set
00:23:11.471 [2024-11-20 07:21:33.715136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.471 [2024-11-20 07:21:33.715151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ pairs for cid:5-8 (lba:25216-25600) omitted, each completed ABORTED - SQ DELETION (00/08) ...]
00:23:11.471 [2024-11-20 07:21:33.715247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.471 [2024-11-20 07:21:33.715254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... WRITE pairs for cid:1-3 (lba:32896-33152) and READ pairs for cid:9-55 (lba:25728-31616) omitted, each completed ABORTED - SQ DELETION (00/08) ...]
00:23:11.473 [2024-11-20 07:21:33.716104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.473 [2024-11-20 07:21:33.716111] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.473 [2024-11-20 07:21:33.716120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.473 [2024-11-20 07:21:33.716128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.473 [2024-11-20 07:21:33.716137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.473 [2024-11-20 07:21:33.716146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.473 [2024-11-20 07:21:33.716155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.473 [2024-11-20 07:21:33.716166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.473 [2024-11-20 07:21:33.716175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.473 [2024-11-20 07:21:33.716182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.473 [2024-11-20 07:21:33.716191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.473 [2024-11-20 07:21:33.716199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.473 [2024-11-20 07:21:33.716208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.473 [2024-11-20 07:21:33.716215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.473 [2024-11-20 07:21:33.716225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.473 [2024-11-20 07:21:33.716232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.473 [2024-11-20 07:21:33.716240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b17d0 is same with the state(6) to be set 00:23:11.473 [2024-11-20 07:21:33.717548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.473 [2024-11-20 07:21:33.717563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.473 [2024-11-20 07:21:33.717578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.473 [2024-11-20 07:21:33.717587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.473 [2024-11-20 07:21:33.717600] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.473 [2024-11-20 07:21:33.717609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.473 [2024-11-20 07:21:33.717619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3480 is same with the state(6) to be set 00:23:11.473 [2024-11-20 07:21:33.717694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.473 [2024-11-20 07:21:33.717704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.473 [2024-11-20 07:21:33.717714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.473 [2024-11-20 07:21:33.717722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.473 [2024-11-20 07:21:33.717732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.473 [2024-11-20 07:21:33.717739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.473 [2024-11-20 07:21:33.717753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.473 [2024-11-20 07:21:33.717760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.473 [2024-11-20 07:21:33.717769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.473 [2024-11-20 07:21:33.717778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.473 [2024-11-20 07:21:33.717787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.473 [2024-11-20 07:21:33.717795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.473 [2024-11-20 07:21:33.717804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.473 [2024-11-20 07:21:33.717812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.473 [2024-11-20 07:21:33.717821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.473 [2024-11-20 07:21:33.717828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.473 [2024-11-20 07:21:33.717838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.473 [2024-11-20 07:21:33.717845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.473 [2024-11-20 07:21:33.717854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.473 [2024-11-20 07:21:33.717861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.473 [2024-11-20 07:21:33.717871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.473 [2024-11-20 07:21:33.717878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.473 [2024-11-20 07:21:33.717888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.473 [2024-11-20 07:21:33.717895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.473 [2024-11-20 07:21:33.717905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.717912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.717921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.717929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.717938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.717945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.717954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.717963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.717973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.717980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.717989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.717996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:11.474 [2024-11-20 07:21:33.718201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 
[2024-11-20 07:21:33.718367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.474 [2024-11-20 07:21:33.718494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.474 [2024-11-20 07:21:33.718503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.475 [2024-11-20 07:21:33.718511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.475 [2024-11-20 07:21:33.718520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.475 [2024-11-20 07:21:33.718527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.475 [2024-11-20 
07:21:33.718537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.475 [2024-11-20 07:21:33.718544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.475 [2024-11-20 07:21:33.718554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.475 [2024-11-20 07:21:33.718561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.475 [2024-11-20 07:21:33.718570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.475 [2024-11-20 07:21:33.718578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.475 [2024-11-20 07:21:33.718587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.475 [2024-11-20 07:21:33.718594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.475 [2024-11-20 07:21:33.718603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.475 [2024-11-20 07:21:33.718612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.475 [2024-11-20 07:21:33.718622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.475 [2024-11-20 07:21:33.718629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.475 [2024-11-20 07:21:33.718638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.475 [2024-11-20 07:21:33.718646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.475 [2024-11-20 07:21:33.718655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.475 [2024-11-20 07:21:33.718663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.475 [2024-11-20 07:21:33.718672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.475 [2024-11-20 07:21:33.718679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.475 [2024-11-20 07:21:33.718689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.475 [2024-11-20 07:21:33.718696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.475 [2024-11-20 07:21:33.718706] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.475 [2024-11-20 07:21:33.718713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.475 [2024-11-20 07:21:33.718722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.475 [2024-11-20 07:21:33.718729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.475 [2024-11-20 07:21:33.718739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.475 [2024-11-20 07:21:33.718746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.475 [2024-11-20 07:21:33.718755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.475 [2024-11-20 07:21:33.718762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.475 [2024-11-20 07:21:33.718772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.475 [2024-11-20 07:21:33.718779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.475 [2024-11-20 07:21:33.718787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ecf20 is same with the state(6) to be set 00:23:11.475 [2024-11-20 07:21:33.720040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:11.475 [2024-11-20 07:21:33.720055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:11.475 [2024-11-20 07:21:33.720065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:11.475 [2024-11-20 07:21:33.720173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.475 [2024-11-20 07:21:33.720185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.475 [2024-11-20 07:21:33.720194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.475 [2024-11-20 07:21:33.720202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.475 [2024-11-20 07:21:33.720210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.475 [2024-11-20 07:21:33.720217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.475 [2024-11-20 07:21:33.720225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
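The "(00/08)" pair in the completions above is the NVMe status code type and status code: SCT 0x0 (generic command status) with SC 0x08, which the NVMe base specification defines as "Command Aborted due to SQ Deletion" — exactly what is expected when a controller reset tears down the submission queues with I/O still queued. A minimal, self-contained C sketch of decoding that pair from a raw completion Dword 3 (decode_status is an illustrative helper, not an SPDK API):

#include <stdint.h>
#include <stdio.h>

/* CQE Dword 3 carries the status field in bits 31:17 per the NVMe base spec:
 * DNR bit 31, More bit 30, CRD bits 29:28, SCT bits 27:25, SC bits 24:17. */
static void decode_status(uint32_t cqe_dw3)
{
    uint8_t sc  = (cqe_dw3 >> 17) & 0xffu; /* status code */
    uint8_t sct = (cqe_dw3 >> 25) & 0x7u;  /* status code type */

    printf("(%02x/%02x)%s\n", sct, sc,
           (sct == 0x0 && sc == 0x08) ? " ABORTED - SQ DELETION" : "");
}

int main(void)
{
    decode_status(0x08u << 17); /* prints "(00/08) ABORTED - SQ DELETION" */
    return 0;
}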
00:23:11.475 [2024-11-20 07:21:33.720240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efc60 is same with the state(6) to be set
00:23:11.475 [2024-11-20 07:21:33.720253] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:23:11.744 [2024-11-20 07:21:33.721259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:11.744 [2024-11-20 07:21:33.721300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:23:11.744 [2024-11-20 07:21:33.721316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efc60 (9): Bad file descriptor
00:23:11.744 [2024-11-20 07:21:33.721688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.744 [2024-11-20 07:21:33.721703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaccb0 with addr=10.0.0.2, port=4420
00:23:11.744 [2024-11-20 07:21:33.721712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaccb0 is same with the state(6) to be set
00:23:11.744 [2024-11-20 07:21:33.721780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.744 [2024-11-20 07:21:33.721790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfac850 with addr=10.0.0.2, port=4420
00:23:11.744 [2024-11-20 07:21:33.721798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfac850 is same with the state(6) to be set
00:23:11.744 [2024-11-20 07:21:33.722083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.744 [2024-11-20 07:21:33.722093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1426c00 with addr=10.0.0.2, port=4420
00:23:11.744 [2024-11-20 07:21:33.722101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1426c00 is same with the state(6) to be set
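errno 111 on Linux is ECONNREFUSED: while the subsystem listener on 10.0.0.2:4420 (4420 being the IANA-assigned NVMe/TCP port) is down during the reset, every reconnect attempt from posix_sock_create is actively refused until the listener comes back. A minimal C sketch that reproduces the same failure mode with a plain blocking socket — the address and port are taken from the log, and any reachable host with the port closed behaves the same way, assuming it answers the SYN with an RST rather than dropping it:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };

    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* A closed port yields errno 111 (ECONNREFUSED) on Linux,
         * matching the posix_sock_create errors above. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}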
TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.722996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 
[2024-11-20 07:21:33.723168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 
07:21:33.723338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723507] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.744 [2024-11-20 07:21:33.723668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.744 [2024-11-20 07:21:33.723675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.744 [... the same NOTICE pair repeats for cid:44 through cid:63 (lba:30208 through lba:32640 in steps of 128, len:128 each) between 07:21:33.723684 and 07:21:33.724013; every remaining READ on qid:1 completes ABORTED - SQ DELETION (00/08) ...]
00:23:11.745 [2024-11-20 07:21:33.724021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ae020 is same with the state(6) to be set
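The "(00/08)" printed with every completion above is the NVMe status pair (SCT/SC): Status Code Type 0x0 (Generic Command Status) and Status Code 0x08 (Command Aborted due to SQ Deletion), which is why each line is labeled ABORTED - SQ DELETION while the queue is torn down. A minimal, self-contained sketch of that decoding follows; the struct and field packing below are simplified stand-ins for illustration, not SPDK's real headers (C bitfield layout is implementation-defined):

    #include <stdio.h>

    /* Illustrative stand-in for the NVMe completion status word
     * (CQE Dword 3, bits 16-31: P, SC, SCT, CRD, M, DNR). */
    struct nvme_status {
        unsigned p   : 1;  /* phase tag (p:0 in the log)        */
        unsigned sc  : 8;  /* Status Code                        */
        unsigned sct : 3;  /* Status Code Type                   */
        unsigned crd : 2;  /* Command Retry Delay                */
        unsigned m   : 1;  /* More (m:0)                         */
        unsigned dnr : 1;  /* Do Not Retry (dnr:0)               */
    };

    int main(void)
    {
        /* The pair the log prints as "(00/08)". */
        struct nvme_status st = { .sct = 0x0, .sc = 0x08 };

        if (st.sct == 0x0 && st.sc == 0x08) {
            printf("ABORTED - SQ DELETION (%02x/%02x) p:%u m:%u dnr:%u\n",
                   st.sct, st.sc, st.p, st.m, st.dnr);
        }
        return 0;
    }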
00:23:11.745 [2024-11-20 07:21:33.725296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.745 [2024-11-20 07:21:33.725312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.745 [... the same NOTICE pair repeats for cid:1 through cid:63 (lba:24704 through lba:32640 in steps of 128, len:128 each) between 07:21:33.725327 and 07:21:33.726408; all 64 READs on qid:1 complete ABORTED - SQ DELETION (00/08) ...]
00:23:11.746 [2024-11-20 07:21:33.726416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af550 is same with the state(6) to be set
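The interleaved nvme_tcp.c ERROR lines come from the TCP transport's receive state machine rejecting a redundant transition: nvme_tcp_qpair_set_recv_state() is asked to move the qpair into the state it is already in (printed as state(6); treated here as an opaque enum value). A paraphrased sketch of that same-state guard, assuming a simplified qpair struct rather than SPDK's real one:

    #include <stdio.h>

    enum recv_state { RECV_STATE_6 = 6 };  /* value assumed; matches "state(6)" */

    struct tcp_qpair { enum recv_state recv_state; };

    /* Sketch of the check that produces the *ERROR* lines above. */
    static void set_recv_state(struct tcp_qpair *tqpair, enum recv_state state)
    {
        if (tqpair->recv_state == state) {
            fprintf(stderr,
                    "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;
    }

    int main(void)
    {
        struct tcp_qpair q = { .recv_state = RECV_STATE_6 };
        set_recv_state(&q, RECV_STATE_6);  /* reproduces the logged message */
        return 0;
    }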
00:23:11.746 [2024-11-20 07:21:33.727683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.746 [2024-11-20 07:21:33.727696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.746 [... the same NOTICE pair repeats for cid:1 through cid:63 (lba:24704 through lba:32640 in steps of 128, len:128 each) between 07:21:33.727708 and 07:21:33.728775; all 64 READs on qid:1 complete ABORTED - SQ DELETION (00/08) ...]
00:23:11.747 [2024-11-20 07:21:33.728784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0a40 is same with the state(6) to be set
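Within each flush the aborted commands are perfectly regular: 64 READs (cid 0 through 63), each len:128 blocks, starting at lba 24576, so lba = 24576 + 128 * cid and the last command lands at lba 32640. A trivial check of that stride (the names here are illustrative, not from the test code):

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint64_t base = 24576;  /* lba printed for cid:0 in each flush */
        const uint64_t len  = 128;    /* len:128 on every READ in the log    */

        for (uint64_t cid = 0; cid <= 63; cid++) {
            uint64_t lba = base + cid * len;           /* stride of one I/O */
            printf("cid:%2llu lba:%llu\n",
                   (unsigned long long)cid, (unsigned long long)lba);
        }
        assert(base + 63 * len == 32640);  /* matches the last lba logged */
        return 0;
    }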
00:23:11.747 [2024-11-20 07:21:33.730044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.747 [2024-11-20 07:21:33.730060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.748 [... the same NOTICE pair repeats for cid:1 through cid:60 (lba:24704 through lba:32256 in steps of 128, len:128 each) between 07:21:33.730072 and 07:21:33.731097, all ABORTED - SQ DELETION (00/08); the transcript breaks off mid-record here ...]
00:23:11.748 [2024-11-20
00:23:11.748 [2024-11-20 07:21:33.731106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.748 [2024-11-20 07:21:33.731114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.748 [2024-11-20 07:21:33.731123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.748 [2024-11-20 07:21:33.731131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.748 [2024-11-20 07:21:33.731141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.748 [2024-11-20 07:21:33.731148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.748 [2024-11-20 07:21:33.731157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1fe0 is same with the state(6) to be set 00:23:11.748 [2024-11-20 07:21:33.732953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:11.748 [2024-11-20 07:21:33.732979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:23:11.748 [2024-11-20 07:21:33.732989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:23:11.748 [2024-11-20 07:21:33.732999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:11.748 task offset: 24576 on job bdev=Nvme4n1 fails
00:23:11.748
00:23:11.748                                                            Latency(us)
[2024-11-20T06:21:34.026Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s    TO/s     Average        min        max
00:23:11.748 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.748 Job: Nvme1n1 ended in about 0.95 seconds with error
00:23:11.748 Verification LBA range: start 0x0 length 0x400
00:23:11.748 Nvme1n1            :       0.95   134.16     8.39    67.08    0.00   314396.44   31675.73   237677.23
00:23:11.748 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.748 Job: Nvme2n1 ended in about 0.96 seconds with error
00:23:11.748 Verification LBA range: start 0x0 length 0x400
00:23:11.748 Nvme2n1            :       0.96   132.77     8.30    66.38    0.00   311315.34   21189.97   255153.49
00:23:11.748 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.748 Job: Nvme3n1 ended in about 0.97 seconds with error
00:23:11.748 Verification LBA range: start 0x0 length 0x400
00:23:11.748 Nvme3n1            :       0.97   202.80    12.67    66.22    0.00   225656.20   14964.05   242920.11
00:23:11.748 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.748 Job: Nvme4n1 ended in about 0.94 seconds with error
00:23:11.748 Verification LBA range: start 0x0 length 0x400
00:23:11.748 Nvme4n1            :       0.94   204.48    12.78    68.16    0.00   217485.65   15291.73   249910.61
00:23:11.748 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.748 Job: Nvme5n1 ended in about 0.97 seconds with error
00:23:11.748 Verification LBA range: start 0x0 length 0x400
00:23:11.748 Nvme5n1            :       0.97   197.08    12.32    65.69    0.00   221421.76   12506.45   249910.61
00:23:11.748 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.748 Job: Nvme6n1 ended in about 0.98 seconds with error
00:23:11.748 Verification LBA range: start 0x0 length 0x400
00:23:11.748 Nvme6n1            :       0.98   196.59    12.29    65.53    0.00   217192.11   21299.20   284863.15
00:23:11.748 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.748 Job: Nvme7n1 ended in about 0.98 seconds with error
00:23:11.748 Verification LBA range: start 0x0 length 0x400
00:23:11.748 Nvme7n1            :       0.98   196.12    12.26    65.37    0.00   212957.23   20316.16   242920.11
00:23:11.748 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.748 Job: Nvme8n1 ended in about 0.98 seconds with error
00:23:11.748 Verification LBA range: start 0x0 length 0x400
00:23:11.748 Nvme8n1            :       0.98   195.65    12.23    65.22    0.00   208713.39   21954.56   225443.84
00:23:11.748 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.748 Job: Nvme9n1 ended in about 0.97 seconds with error
00:23:11.748 Verification LBA range: start 0x0 length 0x400
00:23:11.748 Nvme9n1            :       0.97   194.79    12.17     3.09    0.00   259845.12   18786.99   262144.00
00:23:11.748 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.748 Job: Nvme10n1 ended in about 0.97 seconds with error
00:23:11.748 Verification LBA range: start 0x0 length 0x400
00:23:11.748 Nvme10n1           :       0.97   132.09     8.26    66.05    0.00   261582.79   15947.09   244667.73
[2024-11-20T06:21:34.026Z] ===================================================================================================================
[2024-11-20T06:21:34.026Z] Total              :             1786.52   111.66   598.79    0.00   240394.54   12506.45   284863.15
00:23:11.748 [2024-11-20 07:21:33.758437] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:11.748 [2024-11-20 07:21:33.758897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.748 [2024-11-20 07:21:33.758920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140cb00 with addr=10.0.0.2, port=4420 00:23:11.748 [2024-11-20 07:21:33.758932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb00 is same with the state(6) to be set 00:23:11.748 [2024-11-20 07:21:33.758953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfaccb0 (9): Bad file descriptor 00:23:11.748 [2024-11-20 07:21:33.758966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfac850 (9): Bad file descriptor 00:23:11.748 [2024-11-20 07:21:33.758976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1426c00 (9): Bad file descriptor 00:23:11.748 [2024-11-20 07:21:33.759005] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:23:11.748 [2024-11-20 07:21:33.759024] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:23:11.748 [2024-11-20 07:21:33.759035] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:23:11.748 [2024-11-20 07:21:33.759046] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
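[Editor's note: a quick sanity check on the table above. The MiB/s column is just IOPS scaled by the 65536-byte IO size shown in the job headers (65536 B = 1/16 MiB). A hypothetical one-liner, not part of the test run, reproduces two rows; the small drift against the table is rounding, since the table rounds IOPS before printing:
    awk 'BEGIN {
        # MiB/s = IOPS * io_size / 2^20
        printf "Nvme1n1: %.3f MiB/s (table: 8.39)\n",   134.16 * 65536 / 1048576
        printf "Total:   %.3f MiB/s (table: 111.66)\n", 1786.52 * 65536 / 1048576
    }'
]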
00:23:11.748 [2024-11-20 07:21:33.759056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140cb00 (9): Bad file descriptor 00:23:11.748 [2024-11-20 07:21:33.759440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:23:11.748 [2024-11-20 07:21:33.759783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.748 [2024-11-20 07:21:33.759798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efc60 with addr=10.0.0.2, port=4420 00:23:11.748 [2024-11-20 07:21:33.759806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efc60 is same with the state(6) to be set 00:23:11.748 [2024-11-20 07:21:33.760016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.748 [2024-11-20 07:21:33.760028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa9d50 with addr=10.0.0.2, port=4420 00:23:11.748 [2024-11-20 07:21:33.760036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9d50 is same with the state(6) to be set 00:23:11.748 [2024-11-20 07:21:33.760236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.748 [2024-11-20 07:21:33.760247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaafc0 with addr=10.0.0.2, port=4420 00:23:11.748 [2024-11-20 07:21:33.760254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaafc0 is same with the state(6) to be set 00:23:11.748 [2024-11-20 07:21:33.760611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.748 [2024-11-20 07:21:33.760620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13dc9f0 with addr=10.0.0.2, port=4420 00:23:11.748 [2024-11-20 07:21:33.760627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc9f0 is same with the state(6) to be set 00:23:11.748 [2024-11-20 07:21:33.760927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.748 [2024-11-20 07:21:33.760937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec4610 with addr=10.0.0.2, port=4420 00:23:11.748 [2024-11-20 07:21:33.760944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec4610 is same with the state(6) to be set 00:23:11.748 [2024-11-20 07:21:33.760953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:11.748 [2024-11-20 07:21:33.760960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:11.748 [2024-11-20 07:21:33.760969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:11.748 [2024-11-20 07:21:33.760977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:23:11.748 [2024-11-20 07:21:33.760986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:11.748 [2024-11-20 07:21:33.760992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:11.748 [2024-11-20 07:21:33.760999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:11.748 [2024-11-20 07:21:33.761006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:11.748 [2024-11-20 07:21:33.761013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:11.748 [2024-11-20 07:21:33.761019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:11.748 [2024-11-20 07:21:33.761026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:11.748 [2024-11-20 07:21:33.761033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:23:11.748 [2024-11-20 07:21:33.761070] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:23:11.748 [2024-11-20 07:21:33.762670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.748 [2024-11-20 07:21:33.762689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efe40 with addr=10.0.0.2, port=4420 00:23:11.748 [2024-11-20 07:21:33.762697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efe40 is same with the state(6) to be set 00:23:11.748 [2024-11-20 07:21:33.762708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efc60 (9): Bad file descriptor 00:23:11.748 [2024-11-20 07:21:33.762719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa9d50 (9): Bad file descriptor 00:23:11.748 [2024-11-20 07:21:33.762728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfaafc0 (9): Bad file descriptor 00:23:11.748 [2024-11-20 07:21:33.762738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13dc9f0 (9): Bad file descriptor 00:23:11.748 [2024-11-20 07:21:33.762751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec4610 (9): Bad file descriptor 00:23:11.748 [2024-11-20 07:21:33.762759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:11.748 [2024-11-20 07:21:33.762766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:11.748 [2024-11-20 07:21:33.762773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:11.748 [2024-11-20 07:21:33.762780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:23:11.748 [2024-11-20 07:21:33.762850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:11.748 [2024-11-20 07:21:33.762862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:11.748 [2024-11-20 07:21:33.762871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:11.748 [2024-11-20 07:21:33.762899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efe40 (9): Bad file descriptor 00:23:11.748 [2024-11-20 07:21:33.762908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:23:11.748 [2024-11-20 07:21:33.762914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:23:11.748 [2024-11-20 07:21:33.762921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:23:11.748 [2024-11-20 07:21:33.762928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:23:11.748 [2024-11-20 07:21:33.762935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:11.748 [2024-11-20 07:21:33.762941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:11.748 [2024-11-20 07:21:33.762948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:11.748 [2024-11-20 07:21:33.762955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:23:11.748 [2024-11-20 07:21:33.762962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:11.748 [2024-11-20 07:21:33.762968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:11.748 [2024-11-20 07:21:33.762975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:11.748 [2024-11-20 07:21:33.762981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:23:11.748 [2024-11-20 07:21:33.762988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:11.748 [2024-11-20 07:21:33.762994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:11.748 [2024-11-20 07:21:33.763001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:23:11.748 [2024-11-20 07:21:33.763007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:23:11.748 [2024-11-20 07:21:33.763015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:11.748 [2024-11-20 07:21:33.763021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:11.748 [2024-11-20 07:21:33.763027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:23:11.748 [2024-11-20 07:21:33.763034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:23:11.748 [2024-11-20 07:21:33.763340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.748 [2024-11-20 07:21:33.763354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1426c00 with addr=10.0.0.2, port=4420 00:23:11.748 [2024-11-20 07:21:33.763361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1426c00 is same with the state(6) to be set 00:23:11.748 [2024-11-20 07:21:33.763643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.748 [2024-11-20 07:21:33.763652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfac850 with addr=10.0.0.2, port=4420 00:23:11.748 [2024-11-20 07:21:33.763660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfac850 is same with the state(6) to be set 00:23:11.748 [2024-11-20 07:21:33.763972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.748 [2024-11-20 07:21:33.763982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaccb0 with addr=10.0.0.2, port=4420 00:23:11.748 [2024-11-20 07:21:33.763989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaccb0 is same with the state(6) to be set 00:23:11.748 [2024-11-20 07:21:33.763996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:11.748 [2024-11-20 07:21:33.764003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:11.748 [2024-11-20 07:21:33.764010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:11.748 [2024-11-20 07:21:33.764016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:23:11.748 [2024-11-20 07:21:33.764044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1426c00 (9): Bad file descriptor 00:23:11.748 [2024-11-20 07:21:33.764055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfac850 (9): Bad file descriptor 00:23:11.748 [2024-11-20 07:21:33.764064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfaccb0 (9): Bad file descriptor 00:23:11.748 [2024-11-20 07:21:33.764100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:11.748 [2024-11-20 07:21:33.764108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:11.749 [2024-11-20 07:21:33.764116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:11.749 [2024-11-20 07:21:33.764122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
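[Editor's note: errno 111 is ECONNREFUSED on Linux. The shutdown test has taken the target side down, so nothing is accepting on 10.0.0.2:4420 and every reconnect attempt in the records above fails the same way. A minimal probe that reproduces the failure mode (hypothetical, assumes the port is still closed):
    # connect() to the dead listener fails just like posix_sock_create above
    timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null \
        || echo "connect to 10.0.0.2:4420 refused, matching the errno = 111 errors"
]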
00:23:11.749 [2024-11-20 07:21:33.764129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:11.749 [2024-11-20 07:21:33.764136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:11.749 [2024-11-20 07:21:33.764143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:11.749 [2024-11-20 07:21:33.764149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:11.749 [2024-11-20 07:21:33.764157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:11.749 [2024-11-20 07:21:33.764169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:11.749 [2024-11-20 07:21:33.764176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:11.749 [2024-11-20 07:21:33.764183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:11.749 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:12.692 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3587254 00:23:12.692 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:23:12.692 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3587254 00:23:12.692 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:23:12.692 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.692 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:23:12.692 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.692 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3587254 00:23:12.692 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:23:12.692 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:12.692 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:23:12.692 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:23:12.692 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:23:12.692 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:12.692 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:12.692 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:12.692 07:21:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:12.692 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:12.954 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:12.954 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:12.954 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:12.954 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:12.954 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:12.954 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:12.954 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:12.954 rmmod nvme_tcp 00:23:12.954 rmmod nvme_fabrics 00:23:12.954 rmmod nvme_keyring 00:23:12.954 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:12.954 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:12.954 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:12.954 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3586867 ']' 00:23:12.954 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3586867 00:23:12.954 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3586867 ']' 00:23:12.954 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3586867 00:23:12.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3586867) - No such process 00:23:12.954 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3586867 is not found' 00:23:12.954 Process with pid 3586867 is not found 00:23:12.954 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:12.954 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:12.954 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:12.954 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:12.954 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:12.954 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:12.954 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:12.954 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:12.954 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:12.954 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.954 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.954 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.867 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:14.867 00:23:14.867 real 0m7.735s 00:23:14.867 user 0m18.748s 00:23:14.867 sys 0m1.314s 00:23:14.867 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:14.867 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:14.867 ************************************ 00:23:14.867 END TEST nvmf_shutdown_tc3 00:23:14.867 ************************************ 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:15.127 ************************************ 00:23:15.127 START TEST nvmf_shutdown_tc4 00:23:15.127 ************************************ 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:15.127 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:15.127 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.127 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:15.128 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:15.128 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:15.128 07:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.128 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:15.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:15.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:23:15.388 00:23:15.388 --- 10.0.0.2 ping statistics --- 00:23:15.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.388 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:15.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:23:15.388 00:23:15.388 --- 10.0.0.1 ping statistics --- 00:23:15.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.388 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3588469 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3588469 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 3588469 ']' 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:15.388 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:15.388 [2024-11-20 07:21:37.652059] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:23:15.388 [2024-11-20 07:21:37.652128] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.648 [2024-11-20 07:21:37.754316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:15.648 [2024-11-20 07:21:37.806347] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.648 [2024-11-20 07:21:37.806402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.648 [2024-11-20 07:21:37.806412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.648 [2024-11-20 07:21:37.806419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.648 [2024-11-20 07:21:37.806425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:15.648 [2024-11-20 07:21:37.808493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.648 [2024-11-20 07:21:37.808638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:15.648 [2024-11-20 07:21:37.808801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:15.648 [2024-11-20 07:21:37.808803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.218 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:16.218 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:23:16.218 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:16.218 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:16.218 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:16.478 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.478 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:16.478 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.478 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:16.478 [2024-11-20 07:21:38.500739] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.478 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
00:23:16.478 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:16.478 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:23:16.478 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:23:16.478 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable
00:23:16.478 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:16.478 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:16.478 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:23:16.478 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:23:16.478 [the @28/@29 pair above repeats nine more times, once per remaining subsystem]
00:23:16.478 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
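The @27-@36 sequence above builds a single batch file, rpcs.txt, and replays it through one rpc_cmd invocation reading stdin (the bare "# rpc_cmd" trace). A sketch of what each of the ten iterations plausibly appends, reconstructed from artifacts visible elsewhere in the log (Malloc1..Malloc10, nqn.2016-06.io.spdk:cnodeN, the 10.0.0.2:4420 listener); the malloc bdev size and block size here are assumptions:

    testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target  # per @27
    num_subsystems=({1..10})                                                    # per @23
    rm -rf "$testdir/rpcs.txt"
    for i in "${num_subsystems[@]}"; do
        {
            echo "bdev_malloc_create -b Malloc$i 64 512"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> "$testdir/rpcs.txt"
    done
    rpc_cmd < "$testdir/rpcs.txt"   # the Malloc1..Malloc10 echoes below are this batch executing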
00:23:16.478 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:16.478 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:16.478 Malloc1
00:23:16.478 [2024-11-20 07:21:38.610348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:16.478 Malloc2
00:23:16.478 Malloc3
00:23:16.478 Malloc4
00:23:16.478 Malloc5
00:23:16.738 Malloc6
00:23:16.738 Malloc7
00:23:16.738 Malloc8
00:23:16.738 Malloc9
00:23:16.738 Malloc10
00:23:16.738 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:16.738 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:23:16.738 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable
00:23:16.738 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:16.739 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3588782
00:23:16.998 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:23:16.998 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:23:16.998 [2024-11-20 07:21:39.105792] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
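The load that will be in flight when the target is killed is the spdk_nvme_perf run traced at @148: 128 outstanding commands per queue, 45056-byte (44 KiB) random writes against the TCP target at 10.0.0.2:4420 for 20 seconds. Restated with the flag meanings spelled out; -q, -o, -w, -t and -r are standard spdk_nvme_perf options, while -O and -P are reproduced from the log without further interpretation:

    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
    perf_args=(
        -q 128            # commands kept outstanding per queue
        -o 45056          # I/O size in bytes (44 KiB)
        -O 4096           # as in the log
        -w randwrite      # 100% random writes
        -t 20             # run time in seconds
        -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420'   # connection target
        -P 4              # as in the log
    )
    "$SPDK_BIN/spdk_nvme_perf" "${perf_args[@]}" &
    perfpid=$!            # @149 captures the pid; @150 sleeps 5 s so I/O reaches steady state

The subsystem.c WARNING above fires as perf connects: it contacts the discovery service on 10.0.0.2:4420, and that listener was never explicitly added to the discovery subsystem.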
00:23:22.284 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:22.284 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3588469
00:23:22.284 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3588469 ']'
00:23:22.284 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3588469
00:23:22.284 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname
00:23:22.284 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:23:22.284 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3588469
00:23:22.284 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:23:22.284 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:23:22.284 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3588469'
00:23:22.284 killing process with pid 3588469
00:23:22.284 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 3588469
00:23:22.284 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 3588469
00:23:22.284 [2024-11-20 07:21:44.098185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10affe0 is same with the state(6) to be set
00:23:22.285 [the tqpair=0x10affe0 message above repeats five more times, 07:21:44.098227 through 07:21:44.098248]
00:23:22.285 [2024-11-20 07:21:44.098458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b04d0 is same with the state(6) to be set
00:23:22.285 [the tqpair=0x10b04d0 message above repeats seven more times, 07:21:44.098485 through 07:21:44.098518]
00:23:22.285 [2024-11-20 07:21:44.098820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b09c0 is same with the state(6) to be set
00:23:22.285 [the tqpair=0x10b09c0 message above repeats four more times, 07:21:44.098846 through 07:21:44.098863]
00:23:22.285 [2024-11-20 07:21:44.099229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10afb10 is same with the state(6) to be set
00:23:22.285 [the tqpair=0x10afb10 message above repeats seven more times, 07:21:44.099264 through 07:21:44.099307]
00:23:22.285 Write completed with error (sct=0, sc=8)
00:23:22.285 starting I/O failed: -6
00:23:22.285 [the two messages above repeat, interleaved, for the I/Os queued on this qpair]
00:23:22.285 [2024-11-20 07:21:44.101470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:22.285 [further Write completed with error (sct=0, sc=8) / starting I/O failed: -6 pairs]
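Everything from here on is the failure signature this test is designed to produce, not a malfunction: shutdown.sh@155 kills the target (pid 3588469) while spdk_nvme_perf still has full queues. The target-side tcp.c errors mark each connection being torn down; on the initiator side every outstanding write completes with sct=0, sc=8, which in NVMe status terms is the generic "Command Aborted due to SQ Deletion", and each qpair fails with -6 (ENXIO, "No such device or address"). The same cascade repeats below for the remaining controllers (cnode3, cnode10). Condensed, the teardown choreography is roughly this, with the tolerant wait being an assumption about how the test absorbs perf's failing exit:

    # killprocess and nvmftestfini are helpers from the suite's common scripts.
    trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT

    killprocess "$nvmfpid"    # SIGTERM nvmf_tgt mid-workload; queued I/O now aborts
    wait "$perfpid" || true   # assumed: perf's nonzero exit is tolerated at this point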
(sct=0, sc=8) 00:23:22.285 starting I/O failed: -6 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 starting I/O failed: -6 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 starting I/O failed: -6 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 starting I/O failed: -6 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 starting I/O failed: -6 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 starting I/O failed: -6 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 starting I/O failed: -6 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 starting I/O failed: -6 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 [2024-11-20 07:21:44.102333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 starting I/O failed: -6 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 starting I/O failed: -6 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 starting I/O failed: -6 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 starting I/O failed: -6 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 starting I/O failed: -6 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 starting I/O failed: -6 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 starting I/O failed: -6 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 starting I/O failed: -6 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 starting I/O failed: -6 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 starting I/O failed: -6 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 starting I/O failed: -6 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 starting I/O failed: -6 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 starting I/O failed: -6 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.285 starting I/O failed: -6 00:23:22.285 [2024-11-20 07:21:44.102740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b1380 is same with Write completed with error (sct=0, sc=8) 00:23:22.285 the state(6) to be set 00:23:22.285 starting I/O failed: -6 00:23:22.285 [2024-11-20 07:21:44.102758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b1380 is same with the state(6) to be set 00:23:22.285 [2024-11-20 07:21:44.102764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b1380 is same with the state(6) to be set 00:23:22.285 Write completed with error (sct=0, sc=8) 00:23:22.286 [2024-11-20 07:21:44.102769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x10b1380 is same with the state(6) to be set 00:23:22.286 [2024-11-20 07:21:44.102775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b1380 is same with the state(6) to be set 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 [2024-11-20 07:21:44.103030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b1850 is same with starting I/O failed: -6 00:23:22.286 the state(6) to be set 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 [2024-11-20 07:21:44.103049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b1850 is same with the state(6) to be set 00:23:22.286 starting I/O failed: -6 00:23:22.286 [2024-11-20 07:21:44.103055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b1850 is same with the state(6) to be set 00:23:22.286 [2024-11-20 07:21:44.103061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b1850 is same with the state(6) to be set 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 [2024-11-20 07:21:44.103066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b1850 is same with the state(6) to be set 00:23:22.286 starting I/O failed: -6 00:23:22.286 [2024-11-20 07:21:44.103071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b1850 is same with the state(6) to be set 00:23:22.286 [2024-11-20 07:21:44.103076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b1850 is same with the state(6) to be set 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 [2024-11-20 07:21:44.103081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b1850 is same with the state(6) to be set 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 Write completed with error (sct=0, sc=8) 
00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 [2024-11-20 07:21:44.103255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 [2024-11-20 07:21:44.103424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b0eb0 is same with the state(6) to be set 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 [2024-11-20 07:21:44.103444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b0eb0 is same with the state(6) to be set 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 [2024-11-20 07:21:44.103452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b0eb0 is same with the state(6) to be set 00:23:22.286 starting I/O failed: -6 00:23:22.286 [2024-11-20 07:21:44.103459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b0eb0 is same with the state(6) to be set 00:23:22.286 [2024-11-20 07:21:44.103467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b0eb0 is same with Write completed with error (sct=0, sc=8) 00:23:22.286 the state(6) to be set 00:23:22.286 [2024-11-20 07:21:44.103475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b0eb0 is same with starting I/O failed: -6 00:23:22.286 the state(6) to be set 00:23:22.286 [2024-11-20 07:21:44.103483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b0eb0 is same with the state(6) to be set 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 [2024-11-20 07:21:44.103490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b0eb0 is same with the state(6) to be set 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with 
error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error (sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.286 Write completed with error 
(sct=0, sc=8) 00:23:22.286 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 [2024-11-20 07:21:44.104501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b26e0 is same with the state(6) to be set 00:23:22.287 [2024-11-20 07:21:44.104515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b26e0 is same with the state(6) to be set 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 [2024-11-20 07:21:44.104677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:22.287 NVMe io qpair process completion error 00:23:22.287 [2024-11-20 07:21:44.104693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b2bd0 is same with the state(6) to be set 00:23:22.287 [2024-11-20 07:21:44.104711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b2bd0 is same with the state(6) to be set 00:23:22.287 [2024-11-20 07:21:44.104716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b2bd0 is same with the state(6) to be set 00:23:22.287 [2024-11-20 07:21:44.104721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b2bd0 is same with the state(6) to be set 00:23:22.287 [2024-11-20 07:21:44.104726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b2bd0 is same with the state(6) to be set 00:23:22.287 [2024-11-20 07:21:44.104730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b2bd0 is same with the state(6) to be set 00:23:22.287 [2024-11-20 07:21:44.104916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b30c0 is same with the state(6) to be set 00:23:22.287 [2024-11-20 07:21:44.104934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b30c0 is same with the state(6) to be set 00:23:22.287 [2024-11-20 07:21:44.104939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b30c0 is same with the state(6) to be set 00:23:22.287 [2024-11-20 07:21:44.104944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b30c0 is same with the state(6) to be set 00:23:22.287 [2024-11-20 07:21:44.104949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b30c0 is same with the state(6) to be set 00:23:22.287 [2024-11-20 07:21:44.105257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b2210 is same with the state(6) to be set 00:23:22.287 [2024-11-20 07:21:44.105275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b2210 is same with the state(6) to be set 00:23:22.287 [2024-11-20 07:21:44.105282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b2210 is same with the state(6) to be set 00:23:22.287 [2024-11-20 07:21:44.105289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b2210 is same with the state(6) to be set 00:23:22.287 [2024-11-20 07:21:44.105295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x10b2210 is same with the state(6) to be set 00:23:22.287 [2024-11-20 07:21:44.105302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b2210 is same with the state(6) to be set 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 [2024-11-20 07:21:44.105929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O 
failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 [2024-11-20 07:21:44.106754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 
00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.287 starting I/O failed: -6 00:23:22.287 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 [2024-11-20 07:21:44.107688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write 
completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write 
completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 [2024-11-20 07:21:44.109322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:22.288 NVMe io qpair process completion error 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 starting I/O failed: -6 00:23:22.288 Write completed with error (sct=0, sc=8) 00:23:22.288 Write 
completed with error (sct=0, sc=8)
00:23:22.289 Write completed with error (sct=0, sc=8)
00:23:22.289 starting I/O failed: -6
[... bursts of identical "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines elided here and marked the same way below ...]
00:23:22.289 [2024-11-20 07:21:44.110514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error / failed-I/O lines elided ...]
00:23:22.289 [2024-11-20 07:21:44.111334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error / failed-I/O lines elided ...]
00:23:22.289 [2024-11-20 07:21:44.112260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error / failed-I/O lines elided ...]
00:23:22.290 [2024-11-20 07:21:44.113990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:22.290 NVMe io qpair process completion error
[... repeated write-error / failed-I/O lines elided ...]
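The burst above is one failure fanned out across all in-flight I/O: spdk_nvme_qpair_process_completions() hits a CQ transport error of -6 (-ENXIO, "No such device or address") because the TCP connection to the target is gone, and every outstanding write on that qpair then completes with NVMe status sct=0, sc=8. A minimal C sketch of the completion side, assuming a plain perf-style initiator poll loop (the helper names write_done and poll_qpair are invented; the API calls are real SPDK ones):

    #include <errno.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Completion callback for writes submitted on the qpair. sct=0, sc=8
     * decodes to Generic Command Status / Command Aborted due to SQ
     * Deletion: the queue died, so the command was aborted, not executed. */
    static void
    write_done(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            if (spdk_nvme_cpl_is_error(cpl)) {
                    fprintf(stderr, "Write completed with error (sct=%d, sc=%d)\n",
                            cpl->status.sct, cpl->status.sc);
            }
    }

    static void
    poll_qpair(struct spdk_nvme_qpair *qpair)
    {
            /* Returns the number of completions reaped, or a negative errno.
             * -ENXIO is exactly what nvme_qpair.c logs above as
             * "CQ transport error -6" when the transport drops. */
            int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

            if (rc == -ENXIO) {
                    /* qpair is disconnected; see the submission-side
                     * sketch further down. */
            }
    }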
[... repeated write-error / failed-I/O lines elided ...]
00:23:22.290 [2024-11-20 07:21:44.115103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error / failed-I/O lines elided ...]
00:23:22.291 [2024-11-20 07:21:44.115901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error / failed-I/O lines elided ...]
00:23:22.291 [2024-11-20 07:21:44.116829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error / failed-I/O lines elided ...]
00:23:22.292 [2024-11-20 07:21:44.119301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:22.292 NVMe io qpair process completion error
[... repeated write-error / failed-I/O lines elided ...]
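The interleaved "starting I/O failed: -6" lines are the submission side of the same condition: once the qpair is disconnected, new writes are rejected synchronously instead of being queued. Continuing the sketch above under the same assumptions (write_done is defined there; the buffer and LBA arguments are placeholders):

    static void
    submit_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                 void *buf, uint64_t lba, uint32_t lba_count)
    {
            /* On a dead qpair this returns -ENXIO immediately, which the
             * app reports as "starting I/O failed: -6". */
            int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
                                            write_done, NULL, 0);
            if (rc != 0) {
                    fprintf(stderr, "starting I/O failed: %d\n", rc);
            }
    }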
[... repeated write-error / failed-I/O lines elided ...]
00:23:22.292 [2024-11-20 07:21:44.120461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error / failed-I/O lines elided ...]
00:23:22.292 [2024-11-20 07:21:44.121264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error / failed-I/O lines elided ...]
00:23:22.292 [2024-11-20 07:21:44.122193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error / failed-I/O lines elided ...]
00:23:22.292 [2024-11-20 07:21:44.123782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:22.293 NVMe io qpair process completion error
[... repeated write-error / failed-I/O lines elided ...]
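The same sequence repeats per subsystem (cnode10 and cnode1 above, cnode5 here, cnode6 and cnode8 below): the qpairs of one controller fail in turn, then the app prints "NVMe io qpair process completion error" and stops polling that connection. What a caller does at that point is policy, not part of this log; a sketch of the usual options, still under the same assumptions (the API calls are real SPDK ones, handle_dead_qpair is invented):

    static void
    handle_dead_qpair(struct spdk_nvme_qpair *qpair)
    {
            /* The library records why the qpair failed (local vs. remote). */
            enum spdk_nvme_qp_failure_reason reason =
                    spdk_nvme_qpair_get_failure_reason(qpair);

            if (reason != SPDK_NVME_QPAIR_FAILURE_NONE) {
                    /* Either try to reconnect the I/O qpair in place... */
                    if (spdk_nvme_ctrlr_reconnect_io_qpair(qpair) != 0) {
                            /* ...or give up and release it. */
                            spdk_nvme_ctrlr_free_io_qpair(qpair);
                    }
            }
    }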
[... repeated write-error / failed-I/O lines elided ...]
00:23:22.293 [2024-11-20 07:21:44.124822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error / failed-I/O lines elided ...]
00:23:22.294 [2024-11-20 07:21:44.125637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error / failed-I/O lines elided ...]
00:23:22.294 [2024-11-20 07:21:44.126568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error / failed-I/O lines elided ...]
00:23:22.295 [2024-11-20 07:21:44.129452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:22.295 NVMe io qpair process completion error
[... repeated write-error / failed-I/O lines elided ...]
00:23:22.295 [2024-11-20 07:21:44.130564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "starting I/O failed: -6" and write-error lines elided ...]
00:23:22.295 [2024-11-20 07:21:44.131534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error / failed-I/O lines elided ...]
00:23:22.296 [2024-11-20 07:21:44.132452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions:
*ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O 
failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 [2024-11-20 07:21:44.133879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:22.296 NVMe io qpair process completion error 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting 
I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 [2024-11-20 07:21:44.134990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:22.296 starting I/O failed: -6 00:23:22.296 starting I/O failed: -6 00:23:22.296 starting I/O failed: -6 00:23:22.296 starting I/O failed: -6 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 Write completed with error (sct=0, sc=8) 00:23:22.296 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write 
completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 [2024-11-20 07:21:44.135984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 
starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 [2024-11-20 07:21:44.136900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write 
completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.297 Write completed with error (sct=0, sc=8) 00:23:22.297 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 [2024-11-20 
07:21:44.138337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:22.298 NVMe io qpair process completion error 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 [2024-11-20 07:21:44.139582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 
Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 [2024-11-20 07:21:44.140480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 
00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.298 Write completed with error (sct=0, sc=8) 00:23:22.298 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 [2024-11-20 07:21:44.141387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O 
failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O 
failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 [2024-11-20 07:21:44.144215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:22.299 NVMe io qpair process completion error 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 starting I/O failed: -6 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 Write completed with error (sct=0, sc=8) 00:23:22.299 Write completed with error 
(sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 [2024-11-20 07:21:44.145506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 
00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 [2024-11-20 07:21:44.146337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 
00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 [2024-11-20 07:21:44.147267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.300 Write completed with error (sct=0, sc=8) 00:23:22.300 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error 
(sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 Write completed with error (sct=0, sc=8) 00:23:22.301 starting I/O failed: -6 00:23:22.301 [2024-11-20 07:21:44.149167] 
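For readers skimming the log: the flood of CQ transport errors above is the point of this test case, not a malfunction. nvmf_shutdown_tc4 keeps spdk_nvme_perf writing to the TCP subsystems while the target is torn down, so every in-flight write completes with transport error -6. A minimal sketch of that pattern, assuming illustrative paths, option values, and a $target_pid variable (this is not the literal contents of target/shutdown.sh):

```bash
#!/usr/bin/env bash
# Sketch: keep I/O in flight against an NVMe-oF TCP subsystem, then kill the
# target so outstanding writes complete with transport errors (-6).
# The path, the option values, and $target_pid are illustrative assumptions.

./build/bin/spdk_nvme_perf -q 128 -o 4096 -w write -t 10 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
perf_pid=$!

sleep 2                  # let the workload ramp up
kill -9 "$target_pid"    # tear down the target while writes are still queued

# perf is now expected to exit non-zero ("errors occurred"); the test
# asserts that with `NOT wait $perf_pid`, as traced further down this log.
```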
00:23:22.301 Initializing NVMe Controllers
00:23:22.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:22.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:22.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:22.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:22.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:22.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:22.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:22.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:22.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:22.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:22.301 Controller IO queue size 128, less than required. [reported once after each of the ten attach lines above]
00:23:22.301 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
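The queue-size advisory repeats for every controller; acting on the log's own suggestion would simply mean re-running the workload with a shallower submission queue or smaller IOs. A hedged one-liner (values illustrative; -q is spdk_nvme_perf's queue depth and -o its IO size in bytes):

```bash
# Follow the advisory: shallower queue (-q) and small IOs (-o, bytes) so
# requests are not queued at the NVMe driver level; values are illustrative.
./build/bin/spdk_nvme_perf -q 32 -o 4096 -w write -t 10 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode8'
```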
00:23:22.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:22.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:22.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:22.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:22.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:22.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:22.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:22.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:22.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:22.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:22.301 Initialization complete. Launching workers.
00:23:22.301 ========================================================
00:23:22.301                                                                                Latency(us)
00:23:22.301 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:23:22.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    1869.71      80.34   68476.89     673.11  124160.97
00:23:22.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    1880.73      80.81   68094.17     693.86  127916.63
00:23:22.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    1871.37      80.41   68455.15     915.60  122451.82
00:23:22.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    1894.45      81.40   67659.20     864.07  132436.36
00:23:22.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    1908.16      81.99   66517.22     680.86  123421.04
00:23:22.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    1867.01      80.22   68002.67     612.60  122003.42
00:23:22.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1886.55      81.06   67317.70     716.05  119240.29
00:23:22.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1867.22      80.23   68036.20     830.47  119288.56
00:23:22.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    1854.12      79.67   68547.25     804.33  122632.37
00:23:22.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    1913.98      82.24   66421.17     672.20  117912.59
00:23:22.301 ========================================================
00:23:22.301 Total                                                                    :   18813.30     808.38   67746.12     612.60  132436.36
00:23:22.301
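The per-controller warnings above mean each target subsystem advertises a maximum IO queue depth of 128, so anything the initiator queues beyond that waits inside the NVMe driver rather than on the wire. The table is self-consistent: throughput equals IOPS times IO size, and 808.38 MiB/s divided by 18813.30 IOPS works out to roughly 44 KiB per IO. A minimal sketch of rerunning one of these workloads with a depth the controller can actually accept; the flag values, the workload type, and the cnode1 subsystem chosen here are illustrative assumptions, not settings taken from this run:

    # -q: queue depth per qpair (kept at or below the advertised limit of 128)
    # -o: IO size in bytes; -w: workload type; -t: run time in seconds
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randwrite -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'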
00:23:22.301 [2024-11-20 07:21:44.151898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c5740 is same with the state(6) to be set
00:23:22.301 [2024-11-20 07:21:44.151942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c4890 is same with the state(6) to be set
00:23:22.301 [2024-11-20 07:21:44.151971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c5410 is same with the state(6) to be set
00:23:22.301 [2024-11-20 07:21:44.151999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c5a70 is same with the state(6) to be set
00:23:22.301 [2024-11-20 07:21:44.152028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c6ae0 is same with the state(6) to be set
00:23:22.301 [2024-11-20 07:21:44.152056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c4560 is same with the state(6) to be set
00:23:22.301 [2024-11-20 07:21:44.152085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c6900 is same with the state(6) to be set
00:23:22.301 [2024-11-20 07:21:44.152112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c6720 is same with the state(6) to be set
00:23:22.301 [2024-11-20 07:21:44.152140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c4bc0 is same with the state(6) to be set
00:23:22.301 [2024-11-20 07:21:44.152175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c4ef0 is same with the state(6) to be set
00:23:22.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:22.301 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3588782
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3588782
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3588782
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:23.244 rmmod nvme_tcp
00:23:23.244 rmmod nvme_fabrics
00:23:23.244 rmmod nvme_keyring
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3588469 ']'
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3588469
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3588469 ']'
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3588469
00:23:23.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3588469) - No such process
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3588469 is not found'
00:23:23.244 Process with pid 3588469 is not found
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:23.244 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
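The `kill -0 3588469` probe above, and the resulting "No such process", is how the harness confirms the target has already exited: signal 0 is never delivered, it only tests whether the PID is still signalable. A minimal sketch of that existence-check pattern, simplified from what autotest_common.sh's killprocess actually does:

    # Probe with signal 0; only kill/wait if the process still exists.
    killprocess() {
        local pid=$1
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid" && wait "$pid" 2>/dev/null
        else
            echo "Process with pid $pid is not found"
        fi
    }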
00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:25.789
00:23:25.789 real	0m10.311s
00:23:25.789 user	0m27.976s
00:23:25.789 sys	0m4.037s
00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:25.789 ************************************
00:23:25.789 END TEST nvmf_shutdown_tc4
00:23:25.789 ************************************
00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:23:25.789
00:23:25.789 real	0m43.396s
00:23:25.789 user	1m44.712s
00:23:25.789 sys	0m13.948s
00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable
00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:23:25.789 ************************************
00:23:25.789 END TEST nvmf_shutdown
00:23:25.789 ************************************
00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:23:25.789 ************************************
00:23:25.789 START TEST nvmf_nsid
00:23:25.789 ************************************
00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:23:25.789 * Looking for test storage...
00:23:25.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:25.789 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:25.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.789 --rc genhtml_branch_coverage=1 00:23:25.789 --rc genhtml_function_coverage=1 00:23:25.789 --rc genhtml_legend=1 00:23:25.789 --rc geninfo_all_blocks=1 00:23:25.789 --rc geninfo_unexecuted_blocks=1 00:23:25.789 00:23:25.789 ' 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:25.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.790 --rc genhtml_branch_coverage=1 00:23:25.790 --rc genhtml_function_coverage=1 00:23:25.790 --rc genhtml_legend=1 00:23:25.790 --rc geninfo_all_blocks=1 00:23:25.790 --rc geninfo_unexecuted_blocks=1 00:23:25.790 00:23:25.790 ' 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:25.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.790 --rc genhtml_branch_coverage=1 00:23:25.790 --rc genhtml_function_coverage=1 00:23:25.790 --rc genhtml_legend=1 00:23:25.790 --rc geninfo_all_blocks=1 00:23:25.790 --rc geninfo_unexecuted_blocks=1 00:23:25.790 00:23:25.790 ' 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:25.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.790 --rc genhtml_branch_coverage=1 00:23:25.790 --rc genhtml_function_coverage=1 00:23:25.790 --rc genhtml_legend=1 00:23:25.790 --rc geninfo_all_blocks=1 00:23:25.790 --rc geninfo_unexecuted_blocks=1 00:23:25.790 00:23:25.790 ' 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:25.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:25.790 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:33.927 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.927 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:33.928 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
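The scan above matched the two E810 ports (Intel device ID 0x159b) on the test box; the entries that follow resolve each matched PCI function to its kernel net device through sysfs. A minimal standalone version of that lookup, using the PCI address from this log (the loop body is a sketch, not the harness's exact code):

    pci=0000:4b:00.0
    # Each network-capable PCI function lists its netdev name(s) under sysfs.
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "Found net device under $pci: $(basename "$dev")"
    done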
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:33.928 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:33.928 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:33.928 07:21:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:33.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:33.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms
00:23:33.928
00:23:33.928 --- 10.0.0.2 ping statistics ---
00:23:33.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:33.928 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:33.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:33.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms
00:23:33.928
00:23:33.928 --- 10.0.0.1 ping statistics ---
00:23:33.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:33.928 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3594169
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3594169
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 3594169 ']'
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:33.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable
00:23:33.928 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:23:33.928 [2024-11-20 07:21:55.427086] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
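The namespace plumbing above gives the target and the initiator their own network stacks on one machine: one E810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace and nvmf_tgt is launched inside it, while the other port (cvl_0_1, 10.0.0.1) stays in the root namespace for the host-side tools. A condensed sketch of the same setup, with hypothetical interface names eth0/eth1 standing in for the cvl pair:

    ip netns add target_ns                                     # isolated stack for the target
    ip link set eth0 netns target_ns                           # target-side port
    ip addr add 10.0.0.1/24 dev eth1                           # initiator side, root namespace
    ip netns exec target_ns ip addr add 10.0.0.2/24 dev eth0
    ip link set eth1 up
    ip netns exec target_ns ip link set eth0 up
    ip netns exec target_ns ip link set lo up
    iptables -I INPUT 1 -i eth1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                         # reachability check, then:
    ip netns exec target_ns ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &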
00:23:33.928 [2024-11-20 07:21:55.427150] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.928 [2024-11-20 07:21:55.524818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.928 [2024-11-20 07:21:55.576741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.928 [2024-11-20 07:21:55.576794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.928 [2024-11-20 07:21:55.576808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.928 [2024-11-20 07:21:55.576815] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.928 [2024-11-20 07:21:55.576821] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:33.928 [2024-11-20 07:21:55.577611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3594475 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=bde9d2c0-ec22-4009-a780-d0b790acda69 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=c9c8dcd7-714e-4ce8-8c57-3bbb62b2f4c5 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=2d0172a4-9f47-443e-aeb1-caa63c0681c8 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:34.189 null0 00:23:34.189 null1 00:23:34.189 null2 00:23:34.189 [2024-11-20 07:21:56.346489] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:23:34.189 [2024-11-20 07:21:56.346555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3594475 ] 00:23:34.189 [2024-11-20 07:21:56.348688] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.189 [2024-11-20 07:21:56.372944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3594475 /var/tmp/tgt2.sock 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 3594475 ']' 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:34.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
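A detail worth noting before the connect-and-verify sequence that follows: nsid.sh checks that each namespace's NGUID, as reported by `nvme id-ns`, is simply the UUID generated above with its dashes stripped (the trace shows nvmf/common.sh's uuid2nguid running `tr -d -`, with the comparison done in uppercase). A one-liner illustrating the conversion with ns1uuid from this run; the explicit uppercasing step is an assumption about where the case change happens:

    uuid=bde9d2c0-ec22-4009-a780-d0b790acda69
    nguid=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')
    echo "$nguid"   # BDE9D2C0EC224009A780D0B790ACDA69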
00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:34.189 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:34.189 [2024-11-20 07:21:56.437677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.449 [2024-11-20 07:21:56.490016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.710 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:34.710 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:23:34.710 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:34.971 [2024-11-20 07:21:57.047316] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.971 [2024-11-20 07:21:57.063494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:34.971 nvme0n1 nvme0n2 00:23:34.971 nvme1n1 00:23:34.971 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:34.971 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:34.971 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:36.355 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:36.355 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:36.355 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:23:36.355 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:36.355 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:23:36.355 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:36.355 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:36.355 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:36.355 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:36.355 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:23:36.355 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:23:36.355 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:23:36.355 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:23:37.740 07:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid bde9d2c0-ec22-4009-a780-d0b790acda69 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=bde9d2c0ec224009a780d0b790acda69 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo BDE9D2C0EC224009A780D0B790ACDA69 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ BDE9D2C0EC224009A780D0B790ACDA69 == \B\D\E\9\D\2\C\0\E\C\2\2\4\0\0\9\A\7\8\0\D\0\B\7\9\0\A\C\D\A\6\9 ]] 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid c9c8dcd7-714e-4ce8-8c57-3bbb62b2f4c5 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c9c8dcd7714e4ce88c573bbb62b2f4c5 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C9C8DCD7714E4CE88C573BBB62B2F4C5 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ C9C8DCD7714E4CE88C573BBB62B2F4C5 == \C\9\C\8\D\C\D\7\7\1\4\E\4\C\E\8\8\C\5\7\3\B\B\B\6\2\B\2\F\4\C\5 ]] 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:23:37.740 07:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 2d0172a4-9f47-443e-aeb1-caa63c0681c8 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2d0172a49f47443eaeb1caa63c0681c8 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2D0172A49F47443EAEB1CAA63C0681C8 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 2D0172A49F47443EAEB1CAA63C0681C8 == \2\D\0\1\7\2\A\4\9\F\4\7\4\4\3\E\A\E\B\1\C\A\A\6\3\C\0\6\8\1\C\8 ]] 00:23:37.740 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:37.740 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:37.740 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:37.740 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3594475 00:23:37.740 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 3594475 ']' 00:23:37.740 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 3594475 00:23:38.001 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:23:38.001 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:38.001 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3594475 00:23:38.001 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:38.001 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:38.001 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3594475' 00:23:38.001 killing process with pid 3594475 00:23:38.001 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 3594475 00:23:38.001 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 3594475 00:23:38.261 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:38.261 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:38.261 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:38.261 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:38.261 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:23:38.261 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:38.261 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:38.261 rmmod nvme_tcp 00:23:38.261 rmmod nvme_fabrics 00:23:38.261 rmmod nvme_keyring 00:23:38.261 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:38.261 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:38.261 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:38.261 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3594169 ']' 00:23:38.261 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3594169 00:23:38.261 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 3594169 ']' 00:23:38.261 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 3594169 00:23:38.261 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:23:38.261 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:38.261 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3594169 00:23:38.261 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:38.261 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:38.262 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3594169' 00:23:38.262 killing process with pid 3594169 00:23:38.262 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 3594169 00:23:38.262 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 3594169 00:23:38.262 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:38.262 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:38.262 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:38.262 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:38.262 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:38.262 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:38.262 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:38.262 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:38.262 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:38.262 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.262 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.262 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.809 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:40.809 00:23:40.809 real 0m14.946s 00:23:40.809 user 
0m11.414s 00:23:40.809 sys 0m6.900s 00:23:40.809 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:40.809 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:40.809 ************************************ 00:23:40.809 END TEST nvmf_nsid 00:23:40.809 ************************************ 00:23:40.809 07:22:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:40.809 00:23:40.809 real 13m5.911s 00:23:40.809 user 27m21.525s 00:23:40.809 sys 3m57.164s 00:23:40.809 07:22:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:40.809 07:22:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:40.809 ************************************ 00:23:40.809 END TEST nvmf_target_extra 00:23:40.809 ************************************ 00:23:40.809 07:22:02 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:40.809 07:22:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:40.809 07:22:02 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:40.809 07:22:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:40.809 ************************************ 00:23:40.809 START TEST nvmf_host 00:23:40.809 ************************************ 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:40.809 * Looking for test storage... 00:23:40.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:40.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.809 --rc genhtml_branch_coverage=1 00:23:40.809 --rc genhtml_function_coverage=1 00:23:40.809 --rc genhtml_legend=1 00:23:40.809 --rc geninfo_all_blocks=1 00:23:40.809 --rc geninfo_unexecuted_blocks=1 00:23:40.809 00:23:40.809 ' 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:40.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.809 --rc genhtml_branch_coverage=1 00:23:40.809 --rc genhtml_function_coverage=1 00:23:40.809 --rc genhtml_legend=1 00:23:40.809 --rc geninfo_all_blocks=1 00:23:40.809 --rc geninfo_unexecuted_blocks=1 00:23:40.809 00:23:40.809 ' 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:40.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.809 --rc genhtml_branch_coverage=1 00:23:40.809 --rc genhtml_function_coverage=1 00:23:40.809 --rc genhtml_legend=1 00:23:40.809 --rc geninfo_all_blocks=1 00:23:40.809 --rc geninfo_unexecuted_blocks=1 00:23:40.809 00:23:40.809 ' 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:40.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.809 --rc genhtml_branch_coverage=1 00:23:40.809 --rc genhtml_function_coverage=1 00:23:40.809 --rc genhtml_legend=1 00:23:40.809 --rc geninfo_all_blocks=1 00:23:40.809 --rc geninfo_unexecuted_blocks=1 00:23:40.809 00:23:40.809 ' 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
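
For reference, the NGUID equality check that closed out the nvmf_nsid test above condenses to a few commands. This is a sketch reconstructed from the xtrace output — the uuid2nguid and nvme_get_nguid helpers live in nvmf/common.sh and target/nsid.sh and their exact bodies are assumed here — using the UUID and device name shown in the trace:

  # NGUID is the namespace UUID with dashes stripped; the trace compares
  # both sides upper-cased. UUID and device taken from the log above.
  uuid=2d0172a4-9f47-443e-aeb1-caa63c0681c8
  expected=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')
  actual=$(nvme id-ns /dev/nvme0n3 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
  [[ $actual == "$expected" ]] && echo "nsid 3: NGUID matches the namespace UUID"
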
00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:40.809 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:40.809 07:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.809 ************************************ 00:23:40.809 START TEST nvmf_multicontroller 00:23:40.809 ************************************ 00:23:40.809 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:41.070 * Looking for test storage... 
00:23:41.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:41.070 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:41.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.071 --rc genhtml_branch_coverage=1 00:23:41.071 --rc genhtml_function_coverage=1 00:23:41.071 --rc genhtml_legend=1 00:23:41.071 --rc geninfo_all_blocks=1 00:23:41.071 --rc geninfo_unexecuted_blocks=1 00:23:41.071 00:23:41.071 ' 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:41.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.071 --rc genhtml_branch_coverage=1 00:23:41.071 --rc genhtml_function_coverage=1 00:23:41.071 --rc genhtml_legend=1 00:23:41.071 --rc geninfo_all_blocks=1 00:23:41.071 --rc geninfo_unexecuted_blocks=1 00:23:41.071 00:23:41.071 ' 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:41.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.071 --rc genhtml_branch_coverage=1 00:23:41.071 --rc genhtml_function_coverage=1 00:23:41.071 --rc genhtml_legend=1 00:23:41.071 --rc geninfo_all_blocks=1 00:23:41.071 --rc geninfo_unexecuted_blocks=1 00:23:41.071 00:23:41.071 ' 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:41.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.071 --rc genhtml_branch_coverage=1 00:23:41.071 --rc genhtml_function_coverage=1 00:23:41.071 --rc genhtml_legend=1 00:23:41.071 --rc geninfo_all_blocks=1 00:23:41.071 --rc geninfo_unexecuted_blocks=1 00:23:41.071 00:23:41.071 ' 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:41.071 07:22:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:41.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:41.071 07:22:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:41.071 07:22:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:49.355 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:49.356 
07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:49.356 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:49.356 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:49.356 07:22:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:49.356 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:49.356 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
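
The NIC discovery that just ran (nvmf/common.sh@309-429) boils down to two steps: match the supported PCI vendor/device IDs, then resolve each matching PCI function to its kernel netdev through sysfs. A minimal sketch of that resolution, assuming an lspci-based scan in place of SPDK's internal pci_bus_cache lookup, with the Intel E810 ID pair (0x8086:0x159b) taken from the trace:

  # enumerate E810 functions; lspci -Dnm prints domain-qualified
  # addresses (0000:4b:00.0 style) in machine-readable form
  for pci in $(lspci -Dnm -d 8086:159b | awk '{print $1}'); do
    # each netdev bound to this PCI function appears as a directory here
    for net_dev in "/sys/bus/pci/devices/$pci/net/"*; do
      [[ -e $net_dev ]] || continue
      echo "Found net devices under $pci: ${net_dev##*/}"
    done
  done
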
00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:49.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:49.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:23:49.356 00:23:49.356 --- 10.0.0.2 ping statistics --- 00:23:49.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.356 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:23:49.356 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:49.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:49.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:23:49.356 00:23:49.356 --- 10.0.0.1 ping statistics --- 00:23:49.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.356 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:23:49.357 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:49.357 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:49.357 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:49.357 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:49.357 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:49.357 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:49.357 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:49.357 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:49.357 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:49.357 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:49.357 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:49.357 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:49.357 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:49.357 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3599576 00:23:49.357 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3599576 00:23:49.357 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 3599576 ']' 00:23:49.357 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.357 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:49.357 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:49.357 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.357 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:49.357 07:22:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:49.357 [2024-11-20 07:22:10.868652] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:23:49.357 [2024-11-20 07:22:10.868722] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.357 [2024-11-20 07:22:10.970516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:49.357 [2024-11-20 07:22:11.022535] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.357 [2024-11-20 07:22:11.022598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.357 [2024-11-20 07:22:11.022607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.357 [2024-11-20 07:22:11.022615] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.357 [2024-11-20 07:22:11.022621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:49.357 [2024-11-20 07:22:11.024736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:49.357 [2024-11-20 07:22:11.024898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.357 [2024-11-20 07:22:11.024899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:49.619 [2024-11-20 07:22:11.752165] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:49.619 Malloc0 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:49.619 [2024-11-20 07:22:11.833087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:49.619 [2024-11-20 07:22:11.844923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:49.619 Malloc1 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.619 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:49.882 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.882 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:49.882 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.882 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:49.882 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.882 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:49.882 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.882 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:49.882 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.882 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3599795 00:23:49.882 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:49.882 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:49.882 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3599795 /var/tmp/bdevperf.sock 00:23:49.882 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 3599795 ']' 00:23:49.882 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:49.882 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:49.882 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:49.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
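
Stripped of the xtrace noise, the target bring-up that multicontroller.sh@27-41 performed above (plus the analogous cnode2/Malloc1 calls) is the standard SPDK sequence. The same calls issued directly through scripts/rpc.py would look like the sketch below — NQNs, sizes, addresses, and flags are copied verbatim from the trace, only the rpc.py invocation style is assumed. The duplicate-attach checks that follow then drive bdevperf over /var/tmp/bdevperf.sock against this target:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192   # flags exactly as traced
  $rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512 B blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
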
00:23:49.882 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:49.882 07:22:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.831 NVMe0n1 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.831 1 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.831 request: 00:23:50.831 { 00:23:50.831 "name": "NVMe0", 00:23:50.831 "trtype": "tcp", 00:23:50.831 "traddr": "10.0.0.2", 00:23:50.831 "adrfam": "ipv4", 00:23:50.831 "trsvcid": "4420", 00:23:50.831 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:50.831 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:50.831 "hostaddr": "10.0.0.1", 00:23:50.831 "prchk_reftag": false, 00:23:50.831 "prchk_guard": false, 00:23:50.831 "hdgst": false, 00:23:50.831 "ddgst": false, 00:23:50.831 "allow_unrecognized_csi": false, 00:23:50.831 "method": "bdev_nvme_attach_controller", 00:23:50.831 "req_id": 1 00:23:50.831 } 00:23:50.831 Got JSON-RPC error response 00:23:50.831 response: 00:23:50.831 { 00:23:50.831 "code": -114, 00:23:50.831 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:50.831 } 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.831 request: 00:23:50.831 { 00:23:50.831 "name": "NVMe0", 00:23:50.831 "trtype": "tcp", 00:23:50.831 "traddr": "10.0.0.2", 00:23:50.831 "adrfam": "ipv4", 00:23:50.831 "trsvcid": "4420", 00:23:50.831 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:50.831 "hostaddr": "10.0.0.1", 00:23:50.831 "prchk_reftag": false, 00:23:50.831 "prchk_guard": false, 00:23:50.831 "hdgst": false, 00:23:50.831 "ddgst": false, 00:23:50.831 "allow_unrecognized_csi": false, 00:23:50.831 "method": "bdev_nvme_attach_controller", 00:23:50.831 "req_id": 1 00:23:50.831 } 00:23:50.831 Got JSON-RPC error response 00:23:50.831 response: 00:23:50.831 { 00:23:50.831 "code": -114, 00:23:50.831 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:50.831 } 00:23:50.831 07:22:12 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.831 07:22:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:50.831 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.831 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.831 request: 00:23:50.831 { 00:23:50.831 "name": "NVMe0", 00:23:50.831 "trtype": "tcp", 00:23:50.831 "traddr": "10.0.0.2", 00:23:50.831 "adrfam": "ipv4", 00:23:50.831 "trsvcid": "4420", 00:23:50.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.831 "hostaddr": "10.0.0.1", 00:23:50.831 "prchk_reftag": false, 00:23:50.831 "prchk_guard": false, 00:23:50.831 "hdgst": false, 00:23:50.831 "ddgst": false, 00:23:50.831 "multipath": "disable", 00:23:50.831 "allow_unrecognized_csi": false, 00:23:50.831 "method": "bdev_nvme_attach_controller", 00:23:50.831 "req_id": 1 00:23:50.831 } 00:23:50.831 Got JSON-RPC error response 00:23:50.831 response: 00:23:50.831 { 00:23:50.831 "code": -114, 00:23:50.831 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:50.831 } 00:23:50.831 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:50.831 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:50.831 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:50.831 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:50.831 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:50.831 07:22:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:50.831 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:50.831 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:50.831 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:50.831 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.832 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:50.832 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.832 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:50.832 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.832 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.832 request: 00:23:50.832 { 00:23:50.832 "name": "NVMe0", 00:23:50.832 "trtype": "tcp", 00:23:50.832 "traddr": "10.0.0.2", 00:23:50.832 "adrfam": "ipv4", 00:23:50.832 "trsvcid": "4420", 00:23:50.832 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.832 "hostaddr": "10.0.0.1", 00:23:50.832 "prchk_reftag": false, 00:23:50.832 "prchk_guard": false, 00:23:50.832 "hdgst": false, 00:23:50.832 "ddgst": false, 00:23:50.832 "multipath": "failover", 00:23:50.832 "allow_unrecognized_csi": false, 00:23:50.832 "method": "bdev_nvme_attach_controller", 00:23:50.832 "req_id": 1 00:23:50.832 } 00:23:50.832 Got JSON-RPC error response 00:23:50.832 response: 00:23:50.832 { 00:23:50.832 "code": -114, 00:23:50.832 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:50.832 } 00:23:50.832 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:50.832 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:50.832 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:50.832 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:50.832 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:50.832 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:50.832 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.832 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.832 NVMe0n1 00:23:50.832 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
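The four rejected attaches above all return -114 (Linux -EALREADY): bdev_nvme_attach_controller will not reuse the name NVMe0 for a different host NQN, a different subsystem NQN, or a network path the controller already has, and with -x disable it refuses any second path outright. What does succeed, as the trace continues below, is adding a path the controller does not have yet. Condensed, with the outcomes taken from the responses above:

  # Rejected (-114): the name NVMe0 is taken and the request does not extend it compatibly.
  rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
  # Accepted: same subsystem, genuinely new portal (4421), so NVMe0 gains a second path.
  rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1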
00:23:50.832 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:50.832 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.832 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.093 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.093 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:51.093 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.093 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.093 00:23:51.093 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.093 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:51.093 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:51.093 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.093 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.093 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.093 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:51.093 07:22:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:52.476 { 00:23:52.476 "results": [ 00:23:52.476 { 00:23:52.476 "job": "NVMe0n1", 00:23:52.476 "core_mask": "0x1", 00:23:52.476 "workload": "write", 00:23:52.476 "status": "finished", 00:23:52.476 "queue_depth": 128, 00:23:52.476 "io_size": 4096, 00:23:52.476 "runtime": 1.005559, 00:23:52.476 "iops": 25858.253966201883, 00:23:52.476 "mibps": 101.0088045554761, 00:23:52.476 "io_failed": 0, 00:23:52.476 "io_timeout": 0, 00:23:52.476 "avg_latency_us": 4939.125101146065, 00:23:52.476 "min_latency_us": 2102.6133333333332, 00:23:52.476 "max_latency_us": 17585.493333333332 00:23:52.476 } 00:23:52.476 ], 00:23:52.476 "core_count": 1 00:23:52.476 } 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3599795 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@952 -- # '[' -z 3599795 ']' 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 3599795 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3599795 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3599795' 00:23:52.476 killing process with pid 3599795 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 3599795 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 3599795 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:23:52.476 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:23:52.476 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:52.476 [2024-11-20 07:22:11.977664] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:23:52.476 [2024-11-20 07:22:11.977737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3599795 ] 00:23:52.476 [2024-11-20 07:22:12.069190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.476 [2024-11-20 07:22:12.123746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.476 [2024-11-20 07:22:13.188979] bdev.c:4688:bdev_name_add: *ERROR*: Bdev name effdfc3e-4ff9-4669-9cf2-22c38d78d0b9 already exists 00:23:52.476 [2024-11-20 07:22:13.189025] bdev.c:7898:bdev_register: *ERROR*: Unable to add uuid:effdfc3e-4ff9-4669-9cf2-22c38d78d0b9 alias for bdev NVMe1n1 00:23:52.476 [2024-11-20 07:22:13.189035] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:52.476 Running I/O for 1 seconds... 00:23:52.476 25809.00 IOPS, 100.82 MiB/s 00:23:52.476 Latency(us) 00:23:52.476 [2024-11-20T06:22:14.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.476 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:52.477 NVMe0n1 : 1.01 25858.25 101.01 0.00 0.00 4939.13 2102.61 17585.49 00:23:52.477 [2024-11-20T06:22:14.755Z] =================================================================================================================== 00:23:52.477 [2024-11-20T06:22:14.755Z] Total : 25858.25 101.01 0.00 0.00 4939.13 2102.61 17585.49 00:23:52.477 Received shutdown signal, test time was about 1.000000 seconds 00:23:52.477 00:23:52.477 Latency(us) 00:23:52.477 [2024-11-20T06:22:14.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.477 [2024-11-20T06:22:14.755Z] =================================================================================================================== 00:23:52.477 [2024-11-20T06:22:14.755Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:52.477 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:52.477 rmmod nvme_tcp 00:23:52.477 rmmod nvme_fabrics 00:23:52.477 rmmod nvme_keyring 00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 
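With try.txt dumped and removed, nvmftestfini unwinds the fixture: unload the kernel nvme-tcp/nvme-fabrics modules, kill the nvmf target (pid 3599576 here), strip only the SPDK-tagged firewall rules, and tear down the namespace plumbing. Roughly, as a sketch — the body of _remove_spdk_ns is not traced here, so the final netns deletion is assumed:

  modprobe -v -r nvme-tcp nvme-fabrics          # done in the loop just above
  kill "$nvmfpid" && wait "$nvmfpid"            # killprocess 3599576, traced below
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep non-SPDK rules intact
  ip -4 addr flush cvl_0_1
  ip netns delete cvl_0_0_ns_spdk               # assumed body of _remove_spdk_ns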
00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3599576 ']' 00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3599576 00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 3599576 ']' 00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 3599576 00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3599576 00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3599576' 00:23:52.477 killing process with pid 3599576 00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 3599576 00:23:52.477 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 3599576 00:23:52.736 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:52.736 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:52.736 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:52.736 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:52.736 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:52.736 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:52.736 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:52.736 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:52.736 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:52.736 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.736 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.736 07:22:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.647 07:22:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:54.908 00:23:54.908 real 0m13.911s 00:23:54.908 user 0m16.602s 00:23:54.908 sys 0m6.568s 00:23:54.908 07:22:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:54.908 07:22:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:54.908 ************************************ 00:23:54.908 END TEST nvmf_multicontroller 00:23:54.908 ************************************ 00:23:54.908 07:22:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:54.908 07:22:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:54.908 07:22:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:54.908 07:22:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.908 ************************************ 00:23:54.908 START TEST nvmf_aer 00:23:54.908 ************************************ 00:23:54.908 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:54.908 * Looking for test storage... 00:23:54.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:54.908 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:54.908 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:23:54.908 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:55.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.169 --rc genhtml_branch_coverage=1 00:23:55.169 --rc genhtml_function_coverage=1 00:23:55.169 --rc genhtml_legend=1 00:23:55.169 --rc geninfo_all_blocks=1 00:23:55.169 --rc geninfo_unexecuted_blocks=1 00:23:55.169 00:23:55.169 ' 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:55.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.169 --rc genhtml_branch_coverage=1 00:23:55.169 --rc genhtml_function_coverage=1 00:23:55.169 --rc genhtml_legend=1 00:23:55.169 --rc geninfo_all_blocks=1 00:23:55.169 --rc geninfo_unexecuted_blocks=1 00:23:55.169 00:23:55.169 ' 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:55.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.169 --rc genhtml_branch_coverage=1 00:23:55.169 --rc genhtml_function_coverage=1 00:23:55.169 --rc genhtml_legend=1 00:23:55.169 --rc geninfo_all_blocks=1 00:23:55.169 --rc geninfo_unexecuted_blocks=1 00:23:55.169 00:23:55.169 ' 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:55.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.169 --rc genhtml_branch_coverage=1 00:23:55.169 --rc genhtml_function_coverage=1 00:23:55.169 --rc genhtml_legend=1 00:23:55.169 --rc geninfo_all_blocks=1 00:23:55.169 --rc geninfo_unexecuted_blocks=1 00:23:55.169 00:23:55.169 ' 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:55.169 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:55.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:55.170 07:22:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.311 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:03.312 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:03.312 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:03.312 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.312 07:22:24 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:03.312 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:03.312 
07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:03.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:24:03.312 00:24:03.312 --- 10.0.0.2 ping statistics --- 00:24:03.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.312 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:03.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:24:03.312 00:24:03.312 --- 10.0.0.1 ping statistics --- 00:24:03.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.312 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.312 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:03.313 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:03.313 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:03.313 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:03.313 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:03.313 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.313 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3604590 00:24:03.313 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3604590 00:24:03.313 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:03.313 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 3604590 ']' 00:24:03.313 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.313 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:03.313 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.313 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:03.313 07:22:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.313 [2024-11-20 07:22:24.809285] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
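The target for the AER test runs inside the namespace built above: nvmfappstart launches it via ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, so the target owns 10.0.0.2 on cvl_0_0 while the host side keeps 10.0.0.1 on cvl_0_1. -m 0xF is the reactor core mask (four reactors, matching the startup notices that follow), -e 0xFFFF enables every tracepoint group (hence the spdk_trace hints), and -i 0 picks the shared-memory instance id that process_shm would read on failure. A rough manual equivalent, where waitforlisten is the harness helper that polls until /var/tmp/spdk.sock accepts RPCs:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"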
00:24:03.313 [2024-11-20 07:22:24.809351] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.313 [2024-11-20 07:22:24.908507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:03.313 [2024-11-20 07:22:24.962296] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.313 [2024-11-20 07:22:24.962346] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.313 [2024-11-20 07:22:24.962355] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.313 [2024-11-20 07:22:24.962362] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.313 [2024-11-20 07:22:24.962368] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:03.313 [2024-11-20 07:22:24.964749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.313 [2024-11-20 07:22:24.964916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.313 [2024-11-20 07:22:24.965077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.313 [2024-11-20 07:22:24.965078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.574 [2024-11-20 07:22:25.692894] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.574 Malloc0 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.574 [2024-11-20 07:22:25.768206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.574 [ 00:24:03.574 { 00:24:03.574 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:03.574 "subtype": "Discovery", 00:24:03.574 "listen_addresses": [], 00:24:03.574 "allow_any_host": true, 00:24:03.574 "hosts": [] 00:24:03.574 }, 00:24:03.574 { 00:24:03.574 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.574 "subtype": "NVMe", 00:24:03.574 "listen_addresses": [ 00:24:03.574 { 00:24:03.574 "trtype": "TCP", 00:24:03.574 "adrfam": "IPv4", 00:24:03.574 "traddr": "10.0.0.2", 00:24:03.574 "trsvcid": "4420" 00:24:03.574 } 00:24:03.574 ], 00:24:03.574 "allow_any_host": true, 00:24:03.574 "hosts": [], 00:24:03.574 "serial_number": "SPDK00000000000001", 00:24:03.574 "model_number": "SPDK bdev Controller", 00:24:03.574 "max_namespaces": 2, 00:24:03.574 "min_cntlid": 1, 00:24:03.574 "max_cntlid": 65519, 00:24:03.574 "namespaces": [ 00:24:03.574 { 00:24:03.574 "nsid": 1, 00:24:03.574 "bdev_name": "Malloc0", 00:24:03.574 "name": "Malloc0", 00:24:03.574 "nguid": "4E81106C54464BA39DA73BBC87350F64", 00:24:03.574 "uuid": "4e81106c-5446-4ba3-9da7-3bbc87350f64" 00:24:03.574 } 00:24:03.574 ] 00:24:03.574 } 00:24:03.574 ] 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:03.574 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:03.575 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3604655 00:24:03.575 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:03.575 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:03.575 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:24:03.575 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:03.575 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:24:03.575 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:24:03.575 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:24:03.836 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:03.836 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:24:03.836 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:24:03.836 07:22:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:24:03.836 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:03.836 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 2 -lt 200 ']' 00:24:03.836 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=3 00:24:03.836 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:24:04.098 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:04.098 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:04.098 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:24:04.098 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:04.098 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.098 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.098 Malloc1 00:24:04.098 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.098 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:04.098 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.098 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.098 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.098 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:04.098 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.098 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.098 Asynchronous Event Request test 00:24:04.098 Attaching to 10.0.0.2 00:24:04.098 Attached to 10.0.0.2 00:24:04.098 Registering asynchronous event callbacks... 00:24:04.098 Starting namespace attribute notice tests for all controllers... 00:24:04.098 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:04.098 aer_cb - Changed Namespace 00:24:04.098 Cleaning up... 
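Editor's note: the AER scenario traced above can be replayed by hand against a running nvmf_tgt. The sketch below condenses the RPC calls visible in the trace into one sequence; it assumes an SPDK checkout laid out as in this workspace (stock scripts/rpc.py client, paths relative to the spdk directory) and a target already listening on the RPC socket, so it is illustrative rather than part of the harness.

  # Target side: TCP transport, a 64 MiB malloc bdev, and a subsystem capped
  # at two namespaces (-m 2) so attaching a second one raises an AEN.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Host side: the aer tool registers AER callbacks and touches the file once
  # it is ready; adding Malloc1 as nsid 2 then triggers the "Changed
  # Namespace" notice seen in the trace.
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2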
00:24:04.098 [ 00:24:04.098 { 00:24:04.098 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:04.098 "subtype": "Discovery", 00:24:04.098 "listen_addresses": [], 00:24:04.098 "allow_any_host": true, 00:24:04.098 "hosts": [] 00:24:04.098 }, 00:24:04.098 { 00:24:04.098 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.098 "subtype": "NVMe", 00:24:04.098 "listen_addresses": [ 00:24:04.098 { 00:24:04.098 "trtype": "TCP", 00:24:04.098 "adrfam": "IPv4", 00:24:04.098 "traddr": "10.0.0.2", 00:24:04.098 "trsvcid": "4420" 00:24:04.098 } 00:24:04.098 ], 00:24:04.098 "allow_any_host": true, 00:24:04.098 "hosts": [], 00:24:04.098 "serial_number": "SPDK00000000000001", 00:24:04.098 "model_number": "SPDK bdev Controller", 00:24:04.098 "max_namespaces": 2, 00:24:04.098 "min_cntlid": 1, 00:24:04.098 "max_cntlid": 65519, 00:24:04.098 "namespaces": [ 00:24:04.098 { 00:24:04.098 "nsid": 1, 00:24:04.098 "bdev_name": "Malloc0", 00:24:04.098 "name": "Malloc0", 00:24:04.098 "nguid": "4E81106C54464BA39DA73BBC87350F64", 00:24:04.098 "uuid": "4e81106c-5446-4ba3-9da7-3bbc87350f64" 00:24:04.098 }, 00:24:04.098 { 00:24:04.098 "nsid": 2, 00:24:04.098 "bdev_name": "Malloc1", 00:24:04.098 "name": "Malloc1", 00:24:04.098 "nguid": "80674666D43B4F5BBF0E581200F8C1D7", 00:24:04.098 "uuid": "80674666-d43b-4f5b-bf0e-581200f8c1d7" 00:24:04.098 } 00:24:04.098 ] 00:24:04.098 } 00:24:04.098 ] 00:24:04.098 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3604655 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:04.099 rmmod 
nvme_tcp 00:24:04.099 rmmod nvme_fabrics 00:24:04.099 rmmod nvme_keyring 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3604590 ']' 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3604590 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 3604590 ']' 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 3604590 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:04.099 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3604590 00:24:04.360 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:04.360 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:04.360 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3604590' 00:24:04.360 killing process with pid 3604590 00:24:04.360 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 3604590 00:24:04.360 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 3604590 00:24:04.360 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:04.360 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:04.360 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:04.360 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:04.360 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:24:04.360 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:24:04.360 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:04.360 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.360 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:04.360 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.360 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.360 07:22:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:06.907 00:24:06.907 real 0m11.648s 00:24:06.907 user 0m8.580s 00:24:06.907 sys 0m6.264s 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:06.907 ************************************ 00:24:06.907 END TEST nvmf_aer 00:24:06.907 ************************************ 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.907 ************************************ 00:24:06.907 START TEST nvmf_async_init 00:24:06.907 ************************************ 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:06.907 * Looking for test storage... 00:24:06.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:06.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.907 --rc genhtml_branch_coverage=1 00:24:06.907 --rc genhtml_function_coverage=1 00:24:06.907 --rc genhtml_legend=1 00:24:06.907 --rc geninfo_all_blocks=1 00:24:06.907 --rc geninfo_unexecuted_blocks=1 00:24:06.907 00:24:06.907 ' 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:06.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.907 --rc genhtml_branch_coverage=1 00:24:06.907 --rc genhtml_function_coverage=1 00:24:06.907 --rc genhtml_legend=1 00:24:06.907 --rc geninfo_all_blocks=1 00:24:06.907 --rc geninfo_unexecuted_blocks=1 00:24:06.907 00:24:06.907 ' 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:06.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.907 --rc genhtml_branch_coverage=1 00:24:06.907 --rc genhtml_function_coverage=1 00:24:06.907 --rc genhtml_legend=1 00:24:06.907 --rc geninfo_all_blocks=1 00:24:06.907 --rc geninfo_unexecuted_blocks=1 00:24:06.907 00:24:06.907 ' 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:06.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.907 --rc genhtml_branch_coverage=1 00:24:06.907 --rc genhtml_function_coverage=1 00:24:06.907 --rc genhtml_legend=1 00:24:06.907 --rc geninfo_all_blocks=1 00:24:06.907 --rc geninfo_unexecuted_blocks=1 00:24:06.907 00:24:06.907 ' 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:06.907 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.908 07:22:28 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:06.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:06.908 07:22:28 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=da585d7440fb48039aeb4ecc9bbaac46 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:06.908 07:22:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:15.052 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:15.052 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:15.053 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:15.053 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:15.053 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:15.053 07:22:36 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:15.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:15.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:24:15.053 00:24:15.053 --- 10.0.0.2 ping statistics --- 00:24:15.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.053 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:15.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:15.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:24:15.053 00:24:15.053 --- 10.0.0.1 ping statistics --- 00:24:15.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.053 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3608982 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3608982 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 3608982 ']' 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:15.053 07:22:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.053 [2024-11-20 07:22:36.555256] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
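Editor's note on the fixture the trace just built: with two E810 ports detected, the harness moves the target port (cvl_0_0) into a private network namespace and leaves the initiator port (cvl_0_1) in the root namespace, so host and target traffic cross a real link before nvmf_tgt starts inside the namespace. A rough manual equivalent, using the interface names detected above and assuming root privileges, would be:

  # Target NIC lives in its own namespace with the target IP.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Initiator NIC stays in the root namespace; open the NVMe/TCP port.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Sanity-check both directions, then launch the target in the namespace.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1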
00:24:15.053 [2024-11-20 07:22:36.555323] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.053 [2024-11-20 07:22:36.655525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.053 [2024-11-20 07:22:36.706847] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.053 [2024-11-20 07:22:36.706901] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.053 [2024-11-20 07:22:36.706910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.053 [2024-11-20 07:22:36.706917] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.053 [2024-11-20 07:22:36.706923] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:15.053 [2024-11-20 07:22:36.707666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.314 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:15.314 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:24:15.314 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:15.314 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.315 [2024-11-20 07:22:37.409652] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.315 null0 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g da585d7440fb48039aeb4ecc9bbaac46 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.315 [2024-11-20 07:22:37.470014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.315 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.577 nvme0n1 00:24:15.577 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.577 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:15.577 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.577 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.577 [ 00:24:15.577 { 00:24:15.577 "name": "nvme0n1", 00:24:15.577 "aliases": [ 00:24:15.577 "da585d74-40fb-4803-9aeb-4ecc9bbaac46" 00:24:15.577 ], 00:24:15.577 "product_name": "NVMe disk", 00:24:15.577 "block_size": 512, 00:24:15.577 "num_blocks": 2097152, 00:24:15.577 "uuid": "da585d74-40fb-4803-9aeb-4ecc9bbaac46", 00:24:15.577 "numa_id": 0, 00:24:15.577 "assigned_rate_limits": { 00:24:15.577 "rw_ios_per_sec": 0, 00:24:15.577 "rw_mbytes_per_sec": 0, 00:24:15.577 "r_mbytes_per_sec": 0, 00:24:15.577 "w_mbytes_per_sec": 0 00:24:15.577 }, 00:24:15.577 "claimed": false, 00:24:15.577 "zoned": false, 00:24:15.577 "supported_io_types": { 00:24:15.577 "read": true, 00:24:15.577 "write": true, 00:24:15.577 "unmap": false, 00:24:15.577 "flush": true, 00:24:15.577 "reset": true, 00:24:15.577 "nvme_admin": true, 00:24:15.577 "nvme_io": true, 00:24:15.577 "nvme_io_md": false, 00:24:15.577 "write_zeroes": true, 00:24:15.577 "zcopy": false, 00:24:15.577 "get_zone_info": false, 00:24:15.577 "zone_management": false, 00:24:15.577 "zone_append": false, 00:24:15.577 "compare": true, 00:24:15.577 "compare_and_write": true, 00:24:15.577 "abort": true, 00:24:15.577 "seek_hole": false, 00:24:15.577 "seek_data": false, 00:24:15.577 "copy": true, 00:24:15.577 "nvme_iov_md": false 00:24:15.577 }, 00:24:15.577 
"memory_domains": [ 00:24:15.577 { 00:24:15.577 "dma_device_id": "system", 00:24:15.577 "dma_device_type": 1 00:24:15.577 } 00:24:15.577 ], 00:24:15.577 "driver_specific": { 00:24:15.577 "nvme": [ 00:24:15.577 { 00:24:15.577 "trid": { 00:24:15.577 "trtype": "TCP", 00:24:15.577 "adrfam": "IPv4", 00:24:15.577 "traddr": "10.0.0.2", 00:24:15.577 "trsvcid": "4420", 00:24:15.577 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:15.577 }, 00:24:15.577 "ctrlr_data": { 00:24:15.577 "cntlid": 1, 00:24:15.577 "vendor_id": "0x8086", 00:24:15.577 "model_number": "SPDK bdev Controller", 00:24:15.577 "serial_number": "00000000000000000000", 00:24:15.577 "firmware_revision": "25.01", 00:24:15.577 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:15.577 "oacs": { 00:24:15.577 "security": 0, 00:24:15.577 "format": 0, 00:24:15.577 "firmware": 0, 00:24:15.577 "ns_manage": 0 00:24:15.577 }, 00:24:15.577 "multi_ctrlr": true, 00:24:15.577 "ana_reporting": false 00:24:15.577 }, 00:24:15.577 "vs": { 00:24:15.577 "nvme_version": "1.3" 00:24:15.577 }, 00:24:15.577 "ns_data": { 00:24:15.577 "id": 1, 00:24:15.577 "can_share": true 00:24:15.577 } 00:24:15.577 } 00:24:15.577 ], 00:24:15.577 "mp_policy": "active_passive" 00:24:15.577 } 00:24:15.577 } 00:24:15.577 ] 00:24:15.577 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.577 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:15.577 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.577 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.577 [2024-11-20 07:22:37.746493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:15.577 [2024-11-20 07:22:37.746579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1317ce0 (9): Bad file descriptor 00:24:15.839 [2024-11-20 07:22:37.878266] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.839 [ 00:24:15.839 { 00:24:15.839 "name": "nvme0n1", 00:24:15.839 "aliases": [ 00:24:15.839 "da585d74-40fb-4803-9aeb-4ecc9bbaac46" 00:24:15.839 ], 00:24:15.839 "product_name": "NVMe disk", 00:24:15.839 "block_size": 512, 00:24:15.839 "num_blocks": 2097152, 00:24:15.839 "uuid": "da585d74-40fb-4803-9aeb-4ecc9bbaac46", 00:24:15.839 "numa_id": 0, 00:24:15.839 "assigned_rate_limits": { 00:24:15.839 "rw_ios_per_sec": 0, 00:24:15.839 "rw_mbytes_per_sec": 0, 00:24:15.839 "r_mbytes_per_sec": 0, 00:24:15.839 "w_mbytes_per_sec": 0 00:24:15.839 }, 00:24:15.839 "claimed": false, 00:24:15.839 "zoned": false, 00:24:15.839 "supported_io_types": { 00:24:15.839 "read": true, 00:24:15.839 "write": true, 00:24:15.839 "unmap": false, 00:24:15.839 "flush": true, 00:24:15.839 "reset": true, 00:24:15.839 "nvme_admin": true, 00:24:15.839 "nvme_io": true, 00:24:15.839 "nvme_io_md": false, 00:24:15.839 "write_zeroes": true, 00:24:15.839 "zcopy": false, 00:24:15.839 "get_zone_info": false, 00:24:15.839 "zone_management": false, 00:24:15.839 "zone_append": false, 00:24:15.839 "compare": true, 00:24:15.839 "compare_and_write": true, 00:24:15.839 "abort": true, 00:24:15.839 "seek_hole": false, 00:24:15.839 "seek_data": false, 00:24:15.839 "copy": true, 00:24:15.839 "nvme_iov_md": false 00:24:15.839 }, 00:24:15.839 "memory_domains": [ 00:24:15.839 { 00:24:15.839 "dma_device_id": "system", 00:24:15.839 "dma_device_type": 1 00:24:15.839 } 00:24:15.839 ], 00:24:15.839 "driver_specific": { 00:24:15.839 "nvme": [ 00:24:15.839 { 00:24:15.839 "trid": { 00:24:15.839 "trtype": "TCP", 00:24:15.839 "adrfam": "IPv4", 00:24:15.839 "traddr": "10.0.0.2", 00:24:15.839 "trsvcid": "4420", 00:24:15.839 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:15.839 }, 00:24:15.839 "ctrlr_data": { 00:24:15.839 "cntlid": 2, 00:24:15.839 "vendor_id": "0x8086", 00:24:15.839 "model_number": "SPDK bdev Controller", 00:24:15.839 "serial_number": "00000000000000000000", 00:24:15.839 "firmware_revision": "25.01", 00:24:15.839 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:15.839 "oacs": { 00:24:15.839 "security": 0, 00:24:15.839 "format": 0, 00:24:15.839 "firmware": 0, 00:24:15.839 "ns_manage": 0 00:24:15.839 }, 00:24:15.839 "multi_ctrlr": true, 00:24:15.839 "ana_reporting": false 00:24:15.839 }, 00:24:15.839 "vs": { 00:24:15.839 "nvme_version": "1.3" 00:24:15.839 }, 00:24:15.839 "ns_data": { 00:24:15.839 "id": 1, 00:24:15.839 "can_share": true 00:24:15.839 } 00:24:15.839 } 00:24:15.839 ], 00:24:15.839 "mp_policy": "active_passive" 00:24:15.839 } 00:24:15.839 } 00:24:15.839 ] 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
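Editor's note: since bdev_get_bdevs emits plain JSON, single fields such as the controller ID are easy to pull out when eyeballing dumps like the ones above gets tedious. A small sketch, assuming jq is installed on the test node (it is not part of the harness shown here):

  # Prints the current cntlid: 2 after the reset above, 3 after the TLS
  # reattach later in this test.
  scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
      | jq -r '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'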
00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.6GRSF9s83Z 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.6GRSF9s83Z 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.6GRSF9s83Z 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.839 [2024-11-20 07:22:37.967184] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:15.839 [2024-11-20 07:22:37.967346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.839 [2024-11-20 07:22:37.991260] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:15.839 nvme0n1 00:24:15.839 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.839 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:24:15.839 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.839 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.839 [ 00:24:15.839 { 00:24:15.839 "name": "nvme0n1", 00:24:15.839 "aliases": [ 00:24:15.839 "da585d74-40fb-4803-9aeb-4ecc9bbaac46" 00:24:15.839 ], 00:24:15.839 "product_name": "NVMe disk", 00:24:15.839 "block_size": 512, 00:24:15.839 "num_blocks": 2097152, 00:24:15.839 "uuid": "da585d74-40fb-4803-9aeb-4ecc9bbaac46", 00:24:15.839 "numa_id": 0, 00:24:15.839 "assigned_rate_limits": { 00:24:15.839 "rw_ios_per_sec": 0, 00:24:15.839 "rw_mbytes_per_sec": 0, 00:24:15.839 "r_mbytes_per_sec": 0, 00:24:15.839 "w_mbytes_per_sec": 0 00:24:15.839 }, 00:24:15.839 "claimed": false, 00:24:15.839 "zoned": false, 00:24:15.839 "supported_io_types": { 00:24:15.839 "read": true, 00:24:15.839 "write": true, 00:24:15.839 "unmap": false, 00:24:15.839 "flush": true, 00:24:15.839 "reset": true, 00:24:15.839 "nvme_admin": true, 00:24:15.839 "nvme_io": true, 00:24:15.839 "nvme_io_md": false, 00:24:15.839 "write_zeroes": true, 00:24:15.839 "zcopy": false, 00:24:15.839 "get_zone_info": false, 00:24:15.839 "zone_management": false, 00:24:15.839 "zone_append": false, 00:24:15.839 "compare": true, 00:24:15.839 "compare_and_write": true, 00:24:15.839 "abort": true, 00:24:15.839 "seek_hole": false, 00:24:15.839 "seek_data": false, 00:24:15.839 "copy": true, 00:24:15.839 "nvme_iov_md": false 00:24:15.839 }, 00:24:15.839 "memory_domains": [ 00:24:15.839 { 00:24:15.839 "dma_device_id": "system", 00:24:15.839 "dma_device_type": 1 00:24:15.839 } 00:24:15.839 ], 00:24:15.839 "driver_specific": { 00:24:15.839 "nvme": [ 00:24:15.840 { 00:24:15.840 "trid": { 00:24:15.840 "trtype": "TCP", 00:24:15.840 "adrfam": "IPv4", 00:24:15.840 "traddr": "10.0.0.2", 00:24:15.840 "trsvcid": "4421", 00:24:15.840 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:15.840 }, 00:24:15.840 "ctrlr_data": { 00:24:15.840 "cntlid": 3, 00:24:15.840 "vendor_id": "0x8086", 00:24:15.840 "model_number": "SPDK bdev Controller", 00:24:15.840 "serial_number": "00000000000000000000", 00:24:15.840 "firmware_revision": "25.01", 00:24:15.840 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:15.840 "oacs": { 00:24:15.840 "security": 0, 00:24:15.840 "format": 0, 00:24:15.840 "firmware": 0, 00:24:15.840 "ns_manage": 0 00:24:15.840 }, 00:24:15.840 "multi_ctrlr": true, 00:24:15.840 "ana_reporting": false 00:24:15.840 }, 00:24:15.840 "vs": { 00:24:15.840 "nvme_version": "1.3" 00:24:15.840 }, 00:24:15.840 "ns_data": { 00:24:15.840 "id": 1, 00:24:15.840 "can_share": true 00:24:15.840 } 00:24:15.840 } 00:24:15.840 ], 00:24:15.840 "mp_policy": "active_passive" 00:24:15.840 } 00:24:15.840 } 00:24:15.840 ] 00:24:15.840 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.840 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.840 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.840 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.840 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.840 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.6GRSF9s83Z 00:24:15.840 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
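Editor's note recapping the TLS leg that just ran: the harness writes a PSK interchange string to a mode-0600 key file, registers it with the target keyring, publishes a --secure-channel listener on port 4421, and then both target and initiator reference the key by name. Condensed from the trace below (the key is the test vector used in this run, not one to deploy; both listen notices flag TLS support as experimental):

  KEY_PATH=$(mktemp)
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY_PATH"
  chmod 0600 "$KEY_PATH"
  scripts/rpc.py keyring_file_add_key key0 "$KEY_PATH"

  # Require explicit host grants, listen with TLS, and allow host1 via the PSK.
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0

  # Initiator side: attach over the TLS listener with the same named key.
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0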
00:24:15.840 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:15.840 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:15.840 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:15.840 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:15.840 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:15.840 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:15.840 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:16.100 rmmod nvme_tcp 00:24:16.100 rmmod nvme_fabrics 00:24:16.100 rmmod nvme_keyring 00:24:16.100 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:16.100 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:16.100 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:16.100 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3608982 ']' 00:24:16.100 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3608982 00:24:16.100 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 3608982 ']' 00:24:16.100 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 3608982 00:24:16.100 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:24:16.100 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:16.100 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3608982 00:24:16.100 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:16.100 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:16.100 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3608982' 00:24:16.100 killing process with pid 3608982 00:24:16.100 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 3608982 00:24:16.100 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 3608982 00:24:16.362 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:16.362 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:16.362 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:16.362 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:16.362 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:16.362 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:16.362 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:16.362 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:16.362 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:16.362 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
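The nvmftestfini sequence above reduces to a few commands; a sketch under the assumption that NVMF_PID holds the target pid the harness tracks (3608982 in this run) — kill/wait here approximates the harness's killprocess helper.

    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$NVMF_PID" && wait "$NVMF_PID" 2>/dev/null   # NVMF_PID is hypothetical; the harness tracks it
    # strip only the SPDK_NVMF-tagged firewall rules, leaving the rest intact
    iptables-save | grep -v SPDK_NVMF | iptables-restore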
00:24:16.362 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.362 07:22:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.276 07:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:18.276 00:24:18.276 real 0m11.736s 00:24:18.276 user 0m4.225s 00:24:18.276 sys 0m6.107s 00:24:18.276 07:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:18.276 07:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:18.276 ************************************ 00:24:18.276 END TEST nvmf_async_init 00:24:18.276 ************************************ 00:24:18.276 07:22:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:18.276 07:22:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:18.276 07:22:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:18.276 07:22:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.538 ************************************ 00:24:18.538 START TEST dma 00:24:18.538 ************************************ 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:18.538 * Looking for test storage... 00:24:18.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:18.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.538 --rc genhtml_branch_coverage=1 00:24:18.538 --rc genhtml_function_coverage=1 00:24:18.538 --rc genhtml_legend=1 00:24:18.538 --rc geninfo_all_blocks=1 00:24:18.538 --rc geninfo_unexecuted_blocks=1 00:24:18.538 00:24:18.538 ' 00:24:18.538 07:22:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:18.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.538 --rc genhtml_branch_coverage=1 00:24:18.538 --rc genhtml_function_coverage=1 00:24:18.539 --rc genhtml_legend=1 00:24:18.539 --rc geninfo_all_blocks=1 00:24:18.539 --rc geninfo_unexecuted_blocks=1 00:24:18.539 00:24:18.539 ' 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:18.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.539 --rc genhtml_branch_coverage=1 00:24:18.539 --rc genhtml_function_coverage=1 00:24:18.539 --rc genhtml_legend=1 00:24:18.539 --rc geninfo_all_blocks=1 00:24:18.539 --rc geninfo_unexecuted_blocks=1 00:24:18.539 00:24:18.539 ' 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:18.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.539 --rc genhtml_branch_coverage=1 00:24:18.539 --rc genhtml_function_coverage=1 00:24:18.539 --rc genhtml_legend=1 00:24:18.539 --rc geninfo_all_blocks=1 00:24:18.539 --rc geninfo_unexecuted_blocks=1 00:24:18.539 00:24:18.539 ' 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.539 
07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:18.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:18.539 00:24:18.539 real 0m0.239s 00:24:18.539 user 0m0.141s 00:24:18.539 sys 0m0.115s 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:18.539 07:22:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:18.539 ************************************ 00:24:18.539 END TEST dma 00:24:18.539 ************************************ 00:24:18.801 07:22:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:18.801 07:22:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:18.801 07:22:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:18.801 07:22:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.801 ************************************ 00:24:18.801 START TEST nvmf_identify 00:24:18.801 
************************************ 00:24:18.801 07:22:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:18.801 * Looking for test storage... 00:24:18.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:18.801 07:22:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:18.801 07:22:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:24:18.801 07:22:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:18.801 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:18.801 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:18.801 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:18.801 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:18.801 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:18.801 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:18.801 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:18.801 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:18.801 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:18.801 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:18.801 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:18.801 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:18.801 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:18.801 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:18.801 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:18.801 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:18.801 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:18.801 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:18.801 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:18.801 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:18.801 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:19.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.062 --rc genhtml_branch_coverage=1 00:24:19.062 --rc genhtml_function_coverage=1 00:24:19.062 --rc genhtml_legend=1 00:24:19.062 --rc geninfo_all_blocks=1 00:24:19.062 --rc geninfo_unexecuted_blocks=1 00:24:19.062 00:24:19.062 ' 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:19.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.062 --rc genhtml_branch_coverage=1 00:24:19.062 --rc genhtml_function_coverage=1 00:24:19.062 --rc genhtml_legend=1 00:24:19.062 --rc geninfo_all_blocks=1 00:24:19.062 --rc geninfo_unexecuted_blocks=1 00:24:19.062 00:24:19.062 ' 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:19.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.062 --rc genhtml_branch_coverage=1 00:24:19.062 --rc genhtml_function_coverage=1 00:24:19.062 --rc genhtml_legend=1 00:24:19.062 --rc geninfo_all_blocks=1 00:24:19.062 --rc geninfo_unexecuted_blocks=1 00:24:19.062 00:24:19.062 ' 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:19.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.062 --rc genhtml_branch_coverage=1 00:24:19.062 --rc genhtml_function_coverage=1 00:24:19.062 --rc genhtml_legend=1 00:24:19.062 --rc geninfo_all_blocks=1 00:24:19.062 --rc geninfo_unexecuted_blocks=1 00:24:19.062 00:24:19.062 ' 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.062 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:19.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:19.063 07:22:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:27.202 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.202 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:27.203 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
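The device discovery traced here first matches supported PCI IDs (both ports of an Intel E810, 0x8086:0x159b, bound to the ice driver in this run), then resolves each function to its kernel net device through sysfs; a minimal sketch using the addresses from this log:

    # Resolve net devices for the matched PCI functions via sysfs,
    # mirroring the pci_net_devs lookup in nvmf/common.sh.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for net in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$net" ] && echo "Found net devices under $pci: ${net##*/}"
        done
    done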
00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:27.203 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:27.203 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:27.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:24:27.203 00:24:27.203 --- 10.0.0.2 ping statistics --- 00:24:27.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.203 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:27.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:27.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:24:27.203 00:24:27.203 --- 10.0.0.1 ping statistics --- 00:24:27.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.203 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3613714 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3613714 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 3613714 ']' 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:27.203 07:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.203 [2024-11-20 07:22:48.697370] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
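The nvmf_tcp_init and target-launch steps traced above build a split topology: the target-side port moves into a namespace (cvl_0_0_ns_spdk, 10.0.0.2) while the initiator port stays in the root namespace (10.0.0.1), and nvmf_tgt then runs inside that namespace. A sketch of the same sequence, assuming the cvl_0_0/cvl_0_1 interface names from this run and nvmf_tgt from the SPDK build tree:

    NS=cvl_0_0_ns_spdk
    ip netns add $NS
    ip link set cvl_0_0 netns $NS                   # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root namespace
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # initiator -> target namespace
    ip netns exec $NS ping -c 1 10.0.0.1            # target namespace -> initiator
    ip netns exec $NS build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Forcing traffic through real interfaces across the namespace boundary is what lets a single host exercise the physical NICs end to end instead of looping back over lo.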
00:24:27.203 [2024-11-20 07:22:48.697439] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.203 [2024-11-20 07:22:48.802088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:27.203 [2024-11-20 07:22:48.855857] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.203 [2024-11-20 07:22:48.855914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.203 [2024-11-20 07:22:48.855923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.203 [2024-11-20 07:22:48.855930] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.203 [2024-11-20 07:22:48.855937] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:27.203 [2024-11-20 07:22:48.858056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.203 [2024-11-20 07:22:48.858216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:27.203 [2024-11-20 07:22:48.858329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:27.203 [2024-11-20 07:22:48.858331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.465 [2024-11-20 07:22:49.538317] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.465 Malloc0 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.465 [2024-11-20 07:22:49.663424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.465 [ 00:24:27.465 { 00:24:27.465 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:27.465 "subtype": "Discovery", 00:24:27.465 "listen_addresses": [ 00:24:27.465 { 00:24:27.465 "trtype": "TCP", 00:24:27.465 "adrfam": "IPv4", 00:24:27.465 "traddr": "10.0.0.2", 00:24:27.465 "trsvcid": "4420" 00:24:27.465 } 00:24:27.465 ], 00:24:27.465 "allow_any_host": true, 00:24:27.465 "hosts": [] 00:24:27.465 }, 00:24:27.465 { 00:24:27.465 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.465 "subtype": "NVMe", 00:24:27.465 "listen_addresses": [ 00:24:27.465 { 00:24:27.465 "trtype": "TCP", 00:24:27.465 "adrfam": "IPv4", 00:24:27.465 "traddr": "10.0.0.2", 00:24:27.465 "trsvcid": "4420" 00:24:27.465 } 00:24:27.465 ], 00:24:27.465 "allow_any_host": true, 00:24:27.465 "hosts": [], 00:24:27.465 "serial_number": "SPDK00000000000001", 00:24:27.465 "model_number": "SPDK bdev Controller", 00:24:27.465 "max_namespaces": 32, 00:24:27.465 "min_cntlid": 1, 00:24:27.465 "max_cntlid": 65519, 00:24:27.465 "namespaces": [ 00:24:27.465 { 00:24:27.465 "nsid": 1, 00:24:27.465 "bdev_name": "Malloc0", 00:24:27.465 "name": "Malloc0", 00:24:27.465 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:27.465 "eui64": "ABCDEF0123456789", 00:24:27.465 "uuid": "ff3802a3-8412-47ad-a6fd-ed7968baa12f" 00:24:27.465 } 00:24:27.465 ] 00:24:27.465 } 00:24:27.465 ] 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.465 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:27.465 [2024-11-20 07:22:49.726350] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:24:27.465 [2024-11-20 07:22:49.726397] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3613859 ] 00:24:27.730 [2024-11-20 07:22:49.781890] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:27.730 [2024-11-20 07:22:49.781956] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:27.730 [2024-11-20 07:22:49.781963] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:27.730 [2024-11-20 07:22:49.781982] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:27.730 [2024-11-20 07:22:49.781995] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:27.730 [2024-11-20 07:22:49.785681] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:27.730 [2024-11-20 07:22:49.785730] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9c2690 0 00:24:27.730 [2024-11-20 07:22:49.793175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:27.730 [2024-11-20 07:22:49.793193] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:27.730 [2024-11-20 07:22:49.793198] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:27.730 [2024-11-20 07:22:49.793215] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:27.730 [2024-11-20 07:22:49.793266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.730 [2024-11-20 07:22:49.793272] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.730 [2024-11-20 07:22:49.793277] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9c2690) 00:24:27.730 [2024-11-20 07:22:49.793293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:27.730 [2024-11-20 07:22:49.793319] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24100, cid 0, qid 0 00:24:27.730 [2024-11-20 07:22:49.801180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.730 [2024-11-20 07:22:49.801192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.730 [2024-11-20 07:22:49.801196] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.730 [2024-11-20 07:22:49.801201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24100) on tqpair=0x9c2690 00:24:27.730 [2024-11-20 07:22:49.801215] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:27.730 [2024-11-20 07:22:49.801224] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:27.730 [2024-11-20 07:22:49.801230] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:27.730 [2024-11-20 07:22:49.801247] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.730 [2024-11-20 07:22:49.801251] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.730 [2024-11-20 07:22:49.801255] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9c2690) 00:24:27.730 [2024-11-20 07:22:49.801264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.730 [2024-11-20 07:22:49.801280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24100, cid 0, qid 0 00:24:27.730 [2024-11-20 07:22:49.801478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.730 [2024-11-20 07:22:49.801485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.730 [2024-11-20 07:22:49.801488] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.730 [2024-11-20 07:22:49.801492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24100) on tqpair=0x9c2690 00:24:27.730 [2024-11-20 07:22:49.801499] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:27.731 [2024-11-20 07:22:49.801506] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:27.731 [2024-11-20 07:22:49.801513] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.801517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.801520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9c2690) 00:24:27.731 [2024-11-20 07:22:49.801527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.731 [2024-11-20 07:22:49.801538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24100, cid 0, qid 0 00:24:27.731 [2024-11-20 07:22:49.801784] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.731 [2024-11-20 07:22:49.801791] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.731 [2024-11-20 07:22:49.801794] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.801798] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24100) on tqpair=0x9c2690 00:24:27.731 [2024-11-20 07:22:49.801804] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:27.731 [2024-11-20 07:22:49.801813] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:27.731 [2024-11-20 07:22:49.801825] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.801828] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.801832] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9c2690) 00:24:27.731 [2024-11-20 07:22:49.801839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.731 [2024-11-20 07:22:49.801850] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24100, cid 0, qid 0 
00:24:27.731 [2024-11-20 07:22:49.802060] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.731 [2024-11-20 07:22:49.802067] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.731 [2024-11-20 07:22:49.802070] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.802074] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24100) on tqpair=0x9c2690 00:24:27.731 [2024-11-20 07:22:49.802080] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:27.731 [2024-11-20 07:22:49.802089] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.802093] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.802097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9c2690) 00:24:27.731 [2024-11-20 07:22:49.802104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.731 [2024-11-20 07:22:49.802114] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24100, cid 0, qid 0 00:24:27.731 [2024-11-20 07:22:49.802299] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.731 [2024-11-20 07:22:49.802306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.731 [2024-11-20 07:22:49.802310] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.802314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24100) on tqpair=0x9c2690 00:24:27.731 [2024-11-20 07:22:49.802319] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:27.731 [2024-11-20 07:22:49.802324] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:27.731 [2024-11-20 07:22:49.802332] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:27.731 [2024-11-20 07:22:49.802446] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:27.731 [2024-11-20 07:22:49.802450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:27.731 [2024-11-20 07:22:49.802459] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.802463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.802467] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9c2690) 00:24:27.731 [2024-11-20 07:22:49.802474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.731 [2024-11-20 07:22:49.802484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24100, cid 0, qid 0 00:24:27.731 [2024-11-20 07:22:49.802669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.731 [2024-11-20 07:22:49.802675] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.731 [2024-11-20 07:22:49.802679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.802682] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24100) on tqpair=0x9c2690 00:24:27.731 [2024-11-20 07:22:49.802687] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:27.731 [2024-11-20 07:22:49.802700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.802704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.802708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9c2690) 00:24:27.731 [2024-11-20 07:22:49.802715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.731 [2024-11-20 07:22:49.802725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24100, cid 0, qid 0 00:24:27.731 [2024-11-20 07:22:49.802902] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.731 [2024-11-20 07:22:49.802908] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.731 [2024-11-20 07:22:49.802912] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.802915] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24100) on tqpair=0x9c2690 00:24:27.731 [2024-11-20 07:22:49.802920] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:27.731 [2024-11-20 07:22:49.802925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:27.731 [2024-11-20 07:22:49.802934] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:27.731 [2024-11-20 07:22:49.802942] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:27.731 [2024-11-20 07:22:49.802952] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.802955] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9c2690) 00:24:27.731 [2024-11-20 07:22:49.802962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.731 [2024-11-20 07:22:49.802973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24100, cid 0, qid 0 00:24:27.731 [2024-11-20 07:22:49.803217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.731 [2024-11-20 07:22:49.803225] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.731 [2024-11-20 07:22:49.803229] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.803233] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9c2690): datao=0, datal=4096, cccid=0 00:24:27.731 [2024-11-20 07:22:49.803238] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xa24100) on tqpair(0x9c2690): expected_datao=0, payload_size=4096 00:24:27.731 [2024-11-20 07:22:49.803243] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.803258] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.803263] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.844360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.731 [2024-11-20 07:22:49.844372] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.731 [2024-11-20 07:22:49.844376] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.844381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24100) on tqpair=0x9c2690 00:24:27.731 [2024-11-20 07:22:49.844391] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:27.731 [2024-11-20 07:22:49.844396] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:27.731 [2024-11-20 07:22:49.844400] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:27.731 [2024-11-20 07:22:49.844414] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:27.731 [2024-11-20 07:22:49.844419] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:27.731 [2024-11-20 07:22:49.844424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:27.731 [2024-11-20 07:22:49.844435] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:27.731 [2024-11-20 07:22:49.844442] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.844446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.844450] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9c2690) 00:24:27.731 [2024-11-20 07:22:49.844458] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:27.731 [2024-11-20 07:22:49.844471] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24100, cid 0, qid 0 00:24:27.731 [2024-11-20 07:22:49.844706] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.731 [2024-11-20 07:22:49.844714] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.731 [2024-11-20 07:22:49.844717] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.844721] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24100) on tqpair=0x9c2690 00:24:27.731 [2024-11-20 07:22:49.844729] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.844733] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.844736] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9c2690) 00:24:27.731 [2024-11-20 
07:22:49.844743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.731 [2024-11-20 07:22:49.844749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.731 [2024-11-20 07:22:49.844753] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.844757] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x9c2690) 00:24:27.732 [2024-11-20 07:22:49.844763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.732 [2024-11-20 07:22:49.844770] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.844773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.844777] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9c2690) 00:24:27.732 [2024-11-20 07:22:49.844783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.732 [2024-11-20 07:22:49.844789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.844793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.844796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c2690) 00:24:27.732 [2024-11-20 07:22:49.844802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.732 [2024-11-20 07:22:49.844807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:27.732 [2024-11-20 07:22:49.844815] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:27.732 [2024-11-20 07:22:49.844822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.844825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9c2690) 00:24:27.732 [2024-11-20 07:22:49.844835] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.732 [2024-11-20 07:22:49.844847] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24100, cid 0, qid 0 00:24:27.732 [2024-11-20 07:22:49.844852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24280, cid 1, qid 0 00:24:27.732 [2024-11-20 07:22:49.844857] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24400, cid 2, qid 0 00:24:27.732 [2024-11-20 07:22:49.844862] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24580, cid 3, qid 0 00:24:27.732 [2024-11-20 07:22:49.844867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24700, cid 4, qid 0 00:24:27.732 [2024-11-20 07:22:49.845113] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.732 [2024-11-20 07:22:49.845120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.732 [2024-11-20 07:22:49.845124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.732 
[2024-11-20 07:22:49.845128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24700) on tqpair=0x9c2690 00:24:27.732 [2024-11-20 07:22:49.845136] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:27.732 [2024-11-20 07:22:49.845141] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:27.732 [2024-11-20 07:22:49.845153] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.845157] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9c2690) 00:24:27.732 [2024-11-20 07:22:49.849174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.732 [2024-11-20 07:22:49.849187] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24700, cid 4, qid 0 00:24:27.732 [2024-11-20 07:22:49.849386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.732 [2024-11-20 07:22:49.849393] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.732 [2024-11-20 07:22:49.849396] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.849400] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9c2690): datao=0, datal=4096, cccid=4 00:24:27.732 [2024-11-20 07:22:49.849405] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa24700) on tqpair(0x9c2690): expected_datao=0, payload_size=4096 00:24:27.732 [2024-11-20 07:22:49.849409] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.849426] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.849430] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.849612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.732 [2024-11-20 07:22:49.849621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.732 [2024-11-20 07:22:49.849624] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.849629] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24700) on tqpair=0x9c2690 00:24:27.732 [2024-11-20 07:22:49.849642] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:27.732 [2024-11-20 07:22:49.849672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.849676] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9c2690) 00:24:27.732 [2024-11-20 07:22:49.849683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.732 [2024-11-20 07:22:49.849690] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.849694] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.849700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9c2690) 00:24:27.732 [2024-11-20 07:22:49.849706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.732 [2024-11-20 07:22:49.849721] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24700, cid 4, qid 0 00:24:27.732 [2024-11-20 07:22:49.849727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24880, cid 5, qid 0 00:24:27.732 [2024-11-20 07:22:49.849996] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.732 [2024-11-20 07:22:49.850002] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.732 [2024-11-20 07:22:49.850006] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.850009] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9c2690): datao=0, datal=1024, cccid=4 00:24:27.732 [2024-11-20 07:22:49.850014] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa24700) on tqpair(0x9c2690): expected_datao=0, payload_size=1024 00:24:27.732 [2024-11-20 07:22:49.850019] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.850025] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.850029] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.850035] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.732 [2024-11-20 07:22:49.850041] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.732 [2024-11-20 07:22:49.850044] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.850048] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24880) on tqpair=0x9c2690 00:24:27.732 [2024-11-20 07:22:49.890383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.732 [2024-11-20 07:22:49.890396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.732 [2024-11-20 07:22:49.890400] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.890404] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24700) on tqpair=0x9c2690 00:24:27.732 [2024-11-20 07:22:49.890418] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.890422] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9c2690) 00:24:27.732 [2024-11-20 07:22:49.890430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.732 [2024-11-20 07:22:49.890447] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24700, cid 4, qid 0 00:24:27.732 [2024-11-20 07:22:49.890716] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.732 [2024-11-20 07:22:49.890723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.732 [2024-11-20 07:22:49.890726] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.890730] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9c2690): datao=0, datal=3072, cccid=4 00:24:27.732 [2024-11-20 07:22:49.890734] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa24700) on tqpair(0x9c2690): expected_datao=0, payload_size=3072 00:24:27.732 [2024-11-20 07:22:49.890739] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
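The GET LOG PAGE (02) commands above target log page 0x70, the discovery log: cdw10 0x00ff0070 decodes to 256 dwords (the 1024-byte page header) and 0x02ff0070 to 768 dwords (header plus the two 1024-byte records), after which an 8-byte re-read of the generation counter (cdw10 0x00010070, just below) confirms the page did not change mid-transfer. A sketch of the same read through SPDK's public log-page API; the function name, the buffer sized for exactly the two records shown, and the busy-wait polling are illustrative choices, not the test's code:

#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_log_done;

static void
get_log_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;
	(void)cpl;
	g_log_done = true;
}

/* Reads the header plus the two 1024-byte records reported by numrec. */
static void
read_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
{
	static uint8_t buf[sizeof(struct spdk_nvmf_discovery_log_page) +
			   2 * sizeof(struct spdk_nvmf_discovery_log_page_entry)];
	struct spdk_nvmf_discovery_log_page *log = (void *)buf;

	g_log_done = false;
	/* nsid 0, offset 0, matching the GET LOG PAGE commands in the log. */
	spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
					 buf, sizeof(buf), 0,
					 get_log_done, NULL);
	while (!g_log_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	printf("genctr=%ju numrec=%ju recfmt=%u\n",
	       (uintmax_t)log->genctr, (uintmax_t)log->numrec, log->recfmt);
}

With the target configured as in this test, the printed counters would correspond to the "Generation Counter: 2" and "Number of Records: 2" fields in the discovery log page dump below.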
00:24:27.732 [2024-11-20 07:22:49.890746] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.890750] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.890903] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.732 [2024-11-20 07:22:49.890909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.732 [2024-11-20 07:22:49.890912] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.890916] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24700) on tqpair=0x9c2690 00:24:27.732 [2024-11-20 07:22:49.890925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.890933] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9c2690) 00:24:27.732 [2024-11-20 07:22:49.890939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.732 [2024-11-20 07:22:49.890954] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24700, cid 4, qid 0 00:24:27.732 [2024-11-20 07:22:49.891207] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.732 [2024-11-20 07:22:49.891214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.732 [2024-11-20 07:22:49.891217] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.891221] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9c2690): datao=0, datal=8, cccid=4 00:24:27.732 [2024-11-20 07:22:49.891225] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa24700) on tqpair(0x9c2690): expected_datao=0, payload_size=8 00:24:27.732 [2024-11-20 07:22:49.891230] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.891236] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.891240] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.936177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.732 [2024-11-20 07:22:49.936188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.732 [2024-11-20 07:22:49.936191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.732 [2024-11-20 07:22:49.936195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24700) on tqpair=0x9c2690
=====================================================
00:24:27.732 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:24:27.732 =====================================================
00:24:27.732 Controller Capabilities/Features
00:24:27.733 ================================
00:24:27.733 Vendor ID: 0000
00:24:27.733 Subsystem Vendor ID: 0000
00:24:27.733 Serial Number: ....................
00:24:27.733 Model Number: ........................................
00:24:27.733 Firmware Version: 25.01
00:24:27.733 Recommended Arb Burst: 0
00:24:27.733 IEEE OUI Identifier: 00 00 00
00:24:27.733 Multi-path I/O
00:24:27.733 May have multiple subsystem ports: No
00:24:27.733 May have multiple controllers: No
00:24:27.733 Associated with SR-IOV VF: No
00:24:27.733 Max Data Transfer Size: 131072
00:24:27.733 Max Number of Namespaces: 0
00:24:27.733 Max Number of I/O Queues: 1024
00:24:27.733 NVMe Specification Version (VS): 1.3
00:24:27.733 NVMe Specification Version (Identify): 1.3
00:24:27.733 Maximum Queue Entries: 128
00:24:27.733 Contiguous Queues Required: Yes
00:24:27.733 Arbitration Mechanisms Supported
00:24:27.733 Weighted Round Robin: Not Supported
00:24:27.733 Vendor Specific: Not Supported
00:24:27.733 Reset Timeout: 15000 ms
00:24:27.733 Doorbell Stride: 4 bytes
00:24:27.733 NVM Subsystem Reset: Not Supported
00:24:27.733 Command Sets Supported
00:24:27.733 NVM Command Set: Supported
00:24:27.733 Boot Partition: Not Supported
00:24:27.733 Memory Page Size Minimum: 4096 bytes
00:24:27.733 Memory Page Size Maximum: 4096 bytes
00:24:27.733 Persistent Memory Region: Not Supported
00:24:27.733 Optional Asynchronous Events Supported
00:24:27.733 Namespace Attribute Notices: Not Supported
00:24:27.733 Firmware Activation Notices: Not Supported
00:24:27.733 ANA Change Notices: Not Supported
00:24:27.733 PLE Aggregate Log Change Notices: Not Supported
00:24:27.733 LBA Status Info Alert Notices: Not Supported
00:24:27.733 EGE Aggregate Log Change Notices: Not Supported
00:24:27.733 Normal NVM Subsystem Shutdown event: Not Supported
00:24:27.733 Zone Descriptor Change Notices: Not Supported
00:24:27.733 Discovery Log Change Notices: Supported
00:24:27.733 Controller Attributes
00:24:27.733 128-bit Host Identifier: Not Supported
00:24:27.733 Non-Operational Permissive Mode: Not Supported
00:24:27.733 NVM Sets: Not Supported
00:24:27.733 Read Recovery Levels: Not Supported
00:24:27.733 Endurance Groups: Not Supported
00:24:27.733 Predictable Latency Mode: Not Supported
00:24:27.733 Traffic Based Keep Alive: Not Supported
00:24:27.733 Namespace Granularity: Not Supported
00:24:27.733 SQ Associations: Not Supported
00:24:27.733 UUID List: Not Supported
00:24:27.733 Multi-Domain Subsystem: Not Supported
00:24:27.733 Fixed Capacity Management: Not Supported
00:24:27.733 Variable Capacity Management: Not Supported
00:24:27.733 Delete Endurance Group: Not Supported
00:24:27.733 Delete NVM Set: Not Supported
00:24:27.733 Extended LBA Formats Supported: Not Supported
00:24:27.733 Flexible Data Placement Supported: Not Supported
00:24:27.733
00:24:27.733 Controller Memory Buffer Support
00:24:27.733 ================================
00:24:27.733 Supported: No
00:24:27.733
00:24:27.733 Persistent Memory Region Support
00:24:27.733 ================================
00:24:27.733 Supported: No
00:24:27.733
00:24:27.733 Admin Command Set Attributes
00:24:27.733 ============================
00:24:27.733 Security Send/Receive: Not Supported
00:24:27.733 Format NVM: Not Supported
00:24:27.733 Firmware Activate/Download: Not Supported
00:24:27.733 Namespace Management: Not Supported
00:24:27.733 Device Self-Test: Not Supported
00:24:27.733 Directives: Not Supported
00:24:27.733 NVMe-MI: Not Supported
00:24:27.733 Virtualization Management: Not Supported
00:24:27.733 Doorbell Buffer Config: Not Supported
00:24:27.733 Get LBA Status Capability: Not Supported
00:24:27.733 Command & Feature Lockdown Capability: Not Supported
00:24:27.733 Abort Command Limit: 1
00:24:27.733 Async Event Request Limit: 4
00:24:27.733 Number of Firmware Slots: N/A
00:24:27.733 Firmware Slot 1 Read-Only: N/A
00:24:27.733 Firmware Activation Without Reset: N/A
00:24:27.733 Multiple Update Detection Support: N/A
00:24:27.733 Firmware Update Granularity: No Information Provided
00:24:27.733 Per-Namespace SMART Log: No
00:24:27.733 Asymmetric Namespace Access Log Page: Not Supported
00:24:27.733 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:24:27.733 Command Effects Log Page: Not Supported
00:24:27.733 Get Log Page Extended Data: Supported
00:24:27.733 Telemetry Log Pages: Not Supported
00:24:27.733 Persistent Event Log Pages: Not Supported
00:24:27.733 Supported Log Pages Log Page: May Support
00:24:27.733 Commands Supported & Effects Log Page: Not Supported
00:24:27.733 Feature Identifiers & Effects Log Page: May Support
00:24:27.733 NVMe-MI Commands & Effects Log Page: May Support
00:24:27.733 Data Area 4 for Telemetry Log: Not Supported
00:24:27.733 Error Log Page Entries Supported: 128
00:24:27.733 Keep Alive: Not Supported
00:24:27.733
00:24:27.733 NVM Command Set Attributes
00:24:27.733 ==========================
00:24:27.733 Submission Queue Entry Size
00:24:27.733 Max: 1
00:24:27.733 Min: 1
00:24:27.733 Completion Queue Entry Size
00:24:27.733 Max: 1
00:24:27.733 Min: 1
00:24:27.733 Number of Namespaces: 0
00:24:27.733 Compare Command: Not Supported
00:24:27.733 Write Uncorrectable Command: Not Supported
00:24:27.733 Dataset Management Command: Not Supported
00:24:27.733 Write Zeroes Command: Not Supported
00:24:27.733 Set Features Save Field: Not Supported
00:24:27.733 Reservations: Not Supported
00:24:27.733 Timestamp: Not Supported
00:24:27.733 Copy: Not Supported
00:24:27.733 Volatile Write Cache: Not Present
00:24:27.733 Atomic Write Unit (Normal): 1
00:24:27.733 Atomic Write Unit (PFail): 1
00:24:27.733 Atomic Compare & Write Unit: 1
00:24:27.733 Fused Compare & Write: Supported
00:24:27.733 Scatter-Gather List
00:24:27.733 SGL Command Set: Supported
00:24:27.733 SGL Keyed: Supported
00:24:27.733 SGL Bit Bucket Descriptor: Not Supported
00:24:27.733 SGL Metadata Pointer: Not Supported
00:24:27.733 Oversized SGL: Not Supported
00:24:27.733 SGL Metadata Address: Not Supported
00:24:27.733 SGL Offset: Supported
00:24:27.733 Transport SGL Data Block: Not Supported
00:24:27.733 Replay Protected Memory Block: Not Supported
00:24:27.733
00:24:27.733 Firmware Slot Information
00:24:27.733 =========================
00:24:27.733 Active slot: 0
00:24:27.733
00:24:27.733
00:24:27.733 Error Log
00:24:27.733 =========
00:24:27.733
00:24:27.733 Active Namespaces
00:24:27.733 =================
00:24:27.733 Discovery Log Page
00:24:27.733 ==================
00:24:27.733 Generation Counter: 2
00:24:27.733 Number of Records: 2
00:24:27.733 Record Format: 0
00:24:27.733
00:24:27.733 Discovery Log Entry 0
00:24:27.733 ----------------------
00:24:27.733 Transport Type: 3 (TCP)
00:24:27.733 Address Family: 1 (IPv4)
00:24:27.733 Subsystem Type: 3 (Current Discovery Subsystem)
00:24:27.733 Entry Flags:
00:24:27.733 Duplicate Returned Information: 1
00:24:27.733 Explicit Persistent Connection Support for Discovery: 1
00:24:27.733 Transport Requirements:
00:24:27.733 Secure Channel: Not Required
00:24:27.733 Port ID: 0 (0x0000)
00:24:27.733 Controller ID: 65535 (0xffff)
00:24:27.733 Admin Max SQ Size: 128
00:24:27.733 Transport Service Identifier: 4420
00:24:27.733 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:24:27.733 Transport Address: 10.0.0.2
00:24:27.733 Discovery Log Entry 1
00:24:27.733 ----------------------
00:24:27.733 Transport Type: 3 (TCP)
00:24:27.733 Address Family: 1 (IPv4)
00:24:27.733 Subsystem Type: 2 (NVM Subsystem)
00:24:27.733 Entry Flags:
00:24:27.733 Duplicate Returned Information: 0
00:24:27.733 Explicit Persistent Connection Support for Discovery: 0
00:24:27.733 Transport Requirements:
00:24:27.733 Secure Channel: Not Required
00:24:27.733 Port ID: 0 (0x0000)
00:24:27.733 Controller ID: 65535 (0xffff)
00:24:27.733 Admin Max SQ Size: 128
00:24:27.733 Transport Service Identifier: 4420
00:24:27.733 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:24:27.733 Transport Address: 10.0.0.2
[2024-11-20 07:22:49.936303] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:24:27.734 [2024-11-20 07:22:49.936314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24100) on tqpair=0x9c2690 00:24:27.734 [2024-11-20 07:22:49.936320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.734 [2024-11-20 07:22:49.936327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24280) on tqpair=0x9c2690 00:24:27.734 [2024-11-20 07:22:49.936331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.734 [2024-11-20 07:22:49.936336] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24400) on tqpair=0x9c2690 00:24:27.734 [2024-11-20 07:22:49.936341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.734 [2024-11-20 07:22:49.936346] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24580) on tqpair=0x9c2690 00:24:27.734 [2024-11-20 07:22:49.936350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.734 [2024-11-20 07:22:49.936362] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.936367] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.936371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c2690) 00:24:27.734 [2024-11-20 07:22:49.936378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.734 [2024-11-20 07:22:49.936393] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24580, cid 3, qid 0 00:24:27.734 [2024-11-20 07:22:49.936597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.734 [2024-11-20 07:22:49.936603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.734 [2024-11-20 07:22:49.936607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.936611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24580) on tqpair=0x9c2690 00:24:27.734 [2024-11-20 07:22:49.936618] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.936622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.936628] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c2690) 00:24:27.734 [2024-11-20 07:22:49.936635]
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.734 [2024-11-20 07:22:49.936648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24580, cid 3, qid 0 00:24:27.734 [2024-11-20 07:22:49.936876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.734 [2024-11-20 07:22:49.936884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.734 [2024-11-20 07:22:49.936887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.936891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24580) on tqpair=0x9c2690 00:24:27.734 [2024-11-20 07:22:49.936896] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:27.734 [2024-11-20 07:22:49.936901] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:27.734 [2024-11-20 07:22:49.936911] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.936915] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.936918] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c2690) 00:24:27.734 [2024-11-20 07:22:49.936925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.734 [2024-11-20 07:22:49.936936] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24580, cid 3, qid 0 00:24:27.734 [2024-11-20 07:22:49.937104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.734 [2024-11-20 07:22:49.937110] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.734 [2024-11-20 07:22:49.937113] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.937117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24580) on tqpair=0x9c2690 00:24:27.734 [2024-11-20 07:22:49.937128] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.937132] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.937136] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c2690) 00:24:27.734 [2024-11-20 07:22:49.937142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.734 [2024-11-20 07:22:49.937153] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24580, cid 3, qid 0 00:24:27.734 [2024-11-20 07:22:49.937339] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.734 [2024-11-20 07:22:49.937346] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.734 [2024-11-20 07:22:49.937349] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.937353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24580) on tqpair=0x9c2690 00:24:27.734 [2024-11-20 07:22:49.937363] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.937367] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.937371] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c2690) 00:24:27.734 [2024-11-20 07:22:49.937377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.734 [2024-11-20 07:22:49.937388] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24580, cid 3, qid 0 00:24:27.734 [2024-11-20 07:22:49.937581] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.734 [2024-11-20 07:22:49.937587] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.734 [2024-11-20 07:22:49.937591] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.937595] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24580) on tqpair=0x9c2690 00:24:27.734 [2024-11-20 07:22:49.937607] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.937611] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.937615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c2690) 00:24:27.734 [2024-11-20 07:22:49.937622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.734 [2024-11-20 07:22:49.937632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24580, cid 3, qid 0 00:24:27.734 [2024-11-20 07:22:49.937814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.734 [2024-11-20 07:22:49.937820] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.734 [2024-11-20 07:22:49.937823] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.937827] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24580) on tqpair=0x9c2690 00:24:27.734 [2024-11-20 07:22:49.937837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.937841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.937844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c2690) 00:24:27.734 [2024-11-20 07:22:49.937851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.734 [2024-11-20 07:22:49.937862] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24580, cid 3, qid 0 00:24:27.734 [2024-11-20 07:22:49.938031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.734 [2024-11-20 07:22:49.938037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.734 [2024-11-20 07:22:49.938040] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.938044] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24580) on tqpair=0x9c2690 00:24:27.734 [2024-11-20 07:22:49.938054] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.938058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.938061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c2690) 00:24:27.734 [2024-11-20 07:22:49.938068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.734 [2024-11-20 07:22:49.938079] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24580, cid 3, qid 0 00:24:27.734 [2024-11-20 07:22:49.938248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.734 [2024-11-20 07:22:49.938254] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.734 [2024-11-20 07:22:49.938258] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.938262] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24580) on tqpair=0x9c2690 00:24:27.734 [2024-11-20 07:22:49.938272] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.938276] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.938279] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c2690) 00:24:27.734 [2024-11-20 07:22:49.938286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.734 [2024-11-20 07:22:49.938296] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24580, cid 3, qid 0 00:24:27.734 [2024-11-20 07:22:49.938467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.734 [2024-11-20 07:22:49.938473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.734 [2024-11-20 07:22:49.938477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.938480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24580) on tqpair=0x9c2690 00:24:27.734 [2024-11-20 07:22:49.938490] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.938496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.938500] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c2690) 00:24:27.734 [2024-11-20 07:22:49.938507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.734 [2024-11-20 07:22:49.938517] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24580, cid 3, qid 0 00:24:27.734 [2024-11-20 07:22:49.938738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.734 [2024-11-20 07:22:49.938745] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.734 [2024-11-20 07:22:49.938748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.938752] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24580) on tqpair=0x9c2690 00:24:27.734 [2024-11-20 07:22:49.938763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.938766] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.734 [2024-11-20 07:22:49.938770] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c2690) 00:24:27.734 [2024-11-20 07:22:49.938777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.734 [2024-11-20 07:22:49.938787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24580, cid 3, qid 0 00:24:27.735 [2024-11-20 07:22:49.938969] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.735 [2024-11-20 07:22:49.938976] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.735 [2024-11-20 07:22:49.938979] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.735 [2024-11-20 07:22:49.938983] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24580) on tqpair=0x9c2690 00:24:27.735 [2024-11-20 07:22:49.938993] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.735 [2024-11-20 07:22:49.938997] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.735 [2024-11-20 07:22:49.939001] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c2690) 00:24:27.735 [2024-11-20 07:22:49.939007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.735 [2024-11-20 07:22:49.939017] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24580, cid 3, qid 0 00:24:27.735 [2024-11-20 07:22:49.939232] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.735 [2024-11-20 07:22:49.939238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.735 [2024-11-20 07:22:49.939242] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.735 [2024-11-20 07:22:49.939246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24580) on tqpair=0x9c2690 00:24:27.735 [2024-11-20 07:22:49.939255] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.735 [2024-11-20 07:22:49.939259] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.735 [2024-11-20 07:22:49.939263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c2690) 00:24:27.735 [2024-11-20 07:22:49.939269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.735 [2024-11-20 07:22:49.939280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24580, cid 3, qid 0 00:24:27.735 [2024-11-20 07:22:49.939463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.735 [2024-11-20 07:22:49.939470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.735 [2024-11-20 07:22:49.939473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.735 [2024-11-20 07:22:49.939477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24580) on tqpair=0x9c2690 00:24:27.735 [2024-11-20 07:22:49.939488] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.735 [2024-11-20 07:22:49.939491] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.735 [2024-11-20 07:22:49.939495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c2690) 00:24:27.735 [2024-11-20 07:22:49.939504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.735 [2024-11-20 07:22:49.939515] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24580, cid 3, qid 0 00:24:27.735 [2024-11-20 07:22:49.939721] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.735 [2024-11-20 07:22:49.939728] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.735 [2024-11-20 07:22:49.939731] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.735 [2024-11-20 07:22:49.939735] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24580) on tqpair=0x9c2690 00:24:27.735 [2024-11-20 07:22:49.939745] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.735 [2024-11-20 07:22:49.939748] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.735 [2024-11-20 07:22:49.939752] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c2690) 00:24:27.735 [2024-11-20 07:22:49.939759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.735 [2024-11-20 07:22:49.939770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24580, cid 3, qid 0 00:24:27.735 [2024-11-20 07:22:49.939947] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.735 [2024-11-20 07:22:49.939954] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.735 [2024-11-20 07:22:49.939958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.735 [2024-11-20 07:22:49.939961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24580) on tqpair=0x9c2690 00:24:27.735 [2024-11-20 07:22:49.939972] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.735 [2024-11-20 07:22:49.939976] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.735 [2024-11-20 07:22:49.939979] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c2690) 00:24:27.735 [2024-11-20 07:22:49.939986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.735 [2024-11-20 07:22:49.939997] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24580, cid 3, qid 0 00:24:27.735 [2024-11-20 07:22:49.944170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.735 [2024-11-20 07:22:49.944179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.735 [2024-11-20 07:22:49.944183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.735 [2024-11-20 07:22:49.944186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24580) on tqpair=0x9c2690 00:24:27.735 [2024-11-20 07:22:49.944197] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.735 [2024-11-20 07:22:49.944201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.735 [2024-11-20 07:22:49.944205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9c2690) 00:24:27.735 [2024-11-20 07:22:49.944212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.735 [2024-11-20 07:22:49.944223] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa24580, cid 3, qid 0 00:24:27.735 [2024-11-20 07:22:49.944420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.735 [2024-11-20 07:22:49.944426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.735 [2024-11-20 07:22:49.944430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.735 [2024-11-20 07:22:49.944434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa24580) on tqpair=0x9c2690 00:24:27.735 
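The run of FABRIC PROPERTY GET capsules above is the shutdown poll: the driver has written CC.SHN (nvme_ctrlr_shutdown_set_cc_done, RTD3E = 0 us, shutdown timeout 10000 ms) and now re-reads CSTS until CSTS.SHST reports that shutdown processing is complete, which produces the "shutdown complete in 7 milliseconds" message just below. Over TCP every register read is a Property Get command, which is why the poll shows up as admin-queue traffic. A small illustrative check of the same condition via the public register accessor; spdk_nvme_detach() performs this poll internally, and the helper name here is made up for the sketch:

#include <stdio.h>
#include "spdk/nvme.h"

/* Illustrative only: spdk_nvme_detach() runs this loop for you. */
static void
report_shutdown_status(struct spdk_nvme_ctrlr *ctrlr)
{
	/* Over fabrics this register read becomes a Property Get capsule. */
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	if (csts.bits.shst == SPDK_NVME_SHST_COMPLETE) {
		printf("CSTS.SHST: shutdown processing complete\n");
	} else {
		printf("CSTS.SHST=%u: shutdown still in progress\n", csts.bits.shst);
	}
}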
[2024-11-20 07:22:49.944441] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:24:27.735 00:24:27.735 07:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:27.735 [2024-11-20 07:22:49.991393] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:24:27.735 [2024-11-20 07:22:49.991434] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3613966 ] 00:24:28.000 [2024-11-20 07:22:50.047781] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:28.000 [2024-11-20 07:22:50.047847] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:28.000 [2024-11-20 07:22:50.047854] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:28.000 [2024-11-20 07:22:50.047876] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:28.000 [2024-11-20 07:22:50.047890] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:28.000 [2024-11-20 07:22:50.051498] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:28.000 [2024-11-20 07:22:50.051538] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b9a690 0 00:24:28.000 [2024-11-20 07:22:50.059187] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:28.000 [2024-11-20 07:22:50.059205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:28.000 [2024-11-20 07:22:50.059210] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:28.000 [2024-11-20 07:22:50.059214] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:28.000 [2024-11-20 07:22:50.059251] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.000 [2024-11-20 07:22:50.059258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.000 [2024-11-20 07:22:50.059262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b9a690) 00:24:28.000 [2024-11-20 07:22:50.059277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:28.000 [2024-11-20 07:22:50.059299] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc100, cid 0, qid 0 00:24:28.000 [2024-11-20 07:22:50.066180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.000 [2024-11-20 07:22:50.066193] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.000 [2024-11-20 07:22:50.066197] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.000 [2024-11-20 07:22:50.066202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc100) on tqpair=0x1b9a690 00:24:28.000 [2024-11-20 07:22:50.066214] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:28.000 [2024-11-20 07:22:50.066222] 
nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:28.000 [2024-11-20 07:22:50.066228] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:28.000 [2024-11-20 07:22:50.066243] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.000 [2024-11-20 07:22:50.066248] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.000 [2024-11-20 07:22:50.066252] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b9a690) 00:24:28.000 [2024-11-20 07:22:50.066261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.000 [2024-11-20 07:22:50.066278] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc100, cid 0, qid 0 00:24:28.000 [2024-11-20 07:22:50.066467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.000 [2024-11-20 07:22:50.066480] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.000 [2024-11-20 07:22:50.066485] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.000 [2024-11-20 07:22:50.066489] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc100) on tqpair=0x1b9a690 00:24:28.000 [2024-11-20 07:22:50.066494] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:28.000 [2024-11-20 07:22:50.066502] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:28.000 [2024-11-20 07:22:50.066509] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.000 [2024-11-20 07:22:50.066513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.000 [2024-11-20 07:22:50.066517] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b9a690) 00:24:28.000 [2024-11-20 07:22:50.066523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.000 [2024-11-20 07:22:50.066534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc100, cid 0, qid 0 00:24:28.000 [2024-11-20 07:22:50.066711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.000 [2024-11-20 07:22:50.066719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.000 [2024-11-20 07:22:50.066722] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.000 [2024-11-20 07:22:50.066726] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc100) on tqpair=0x1b9a690 00:24:28.000 [2024-11-20 07:22:50.066731] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:28.000 [2024-11-20 07:22:50.066740] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:28.000 [2024-11-20 07:22:50.066747] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.000 [2024-11-20 07:22:50.066750] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.000 [2024-11-20 07:22:50.066754] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x1b9a690) 00:24:28.000 [2024-11-20 07:22:50.066761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.000 [2024-11-20 07:22:50.066771] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc100, cid 0, qid 0 00:24:28.000 [2024-11-20 07:22:50.067008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.000 [2024-11-20 07:22:50.067016] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.000 [2024-11-20 07:22:50.067021] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.000 [2024-11-20 07:22:50.067025] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc100) on tqpair=0x1b9a690 00:24:28.000 [2024-11-20 07:22:50.067030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:28.000 [2024-11-20 07:22:50.067040] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.000 [2024-11-20 07:22:50.067044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.000 [2024-11-20 07:22:50.067048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b9a690) 00:24:28.000 [2024-11-20 07:22:50.067054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.000 [2024-11-20 07:22:50.067065] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc100, cid 0, qid 0 00:24:28.000 [2024-11-20 07:22:50.067302] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.000 [2024-11-20 07:22:50.067309] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.000 [2024-11-20 07:22:50.067313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.000 [2024-11-20 07:22:50.067317] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc100) on tqpair=0x1b9a690 00:24:28.000 [2024-11-20 07:22:50.067326] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:28.000 [2024-11-20 07:22:50.067331] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:28.000 [2024-11-20 07:22:50.067340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:28.000 [2024-11-20 07:22:50.067451] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:28.000 [2024-11-20 07:22:50.067456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:28.000 [2024-11-20 07:22:50.067465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.000 [2024-11-20 07:22:50.067469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.000 [2024-11-20 07:22:50.067473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b9a690) 00:24:28.000 [2024-11-20 07:22:50.067479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.000 [2024-11-20 07:22:50.067491] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc100, cid 0, qid 0 00:24:28.000 [2024-11-20 07:22:50.067665] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.000 [2024-11-20 07:22:50.067672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.000 [2024-11-20 07:22:50.067675] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.000 [2024-11-20 07:22:50.067679] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc100) on tqpair=0x1b9a690 00:24:28.000 [2024-11-20 07:22:50.067684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:28.000 [2024-11-20 07:22:50.067693] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.000 [2024-11-20 07:22:50.067697] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.067701] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b9a690) 00:24:28.001 [2024-11-20 07:22:50.067708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.001 [2024-11-20 07:22:50.067719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc100, cid 0, qid 0 00:24:28.001 [2024-11-20 07:22:50.067899] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.001 [2024-11-20 07:22:50.067907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.001 [2024-11-20 07:22:50.067910] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.067915] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc100) on tqpair=0x1b9a690 00:24:28.001 [2024-11-20 07:22:50.067920] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:28.001 [2024-11-20 07:22:50.067926] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:28.001 [2024-11-20 07:22:50.067934] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:28.001 [2024-11-20 07:22:50.067943] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:28.001 [2024-11-20 07:22:50.067954] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.067959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b9a690) 00:24:28.001 [2024-11-20 07:22:50.067966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.001 [2024-11-20 07:22:50.067981] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc100, cid 0, qid 0 00:24:28.001 [2024-11-20 07:22:50.068236] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.001 [2024-11-20 07:22:50.068245] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.001 [2024-11-20 07:22:50.068249] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.068254] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b9a690): datao=0, datal=4096, cccid=0 00:24:28.001 [2024-11-20 07:22:50.068261] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bfc100) on tqpair(0x1b9a690): expected_datao=0, payload_size=4096 00:24:28.001 [2024-11-20 07:22:50.068266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.068275] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.068281] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.068397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.001 [2024-11-20 07:22:50.068405] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.001 [2024-11-20 07:22:50.068409] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.068415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc100) on tqpair=0x1b9a690 00:24:28.001 [2024-11-20 07:22:50.068424] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:28.001 [2024-11-20 07:22:50.068431] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:28.001 [2024-11-20 07:22:50.068437] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:28.001 [2024-11-20 07:22:50.068450] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:28.001 [2024-11-20 07:22:50.068457] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:28.001 [2024-11-20 07:22:50.068463] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:28.001 [2024-11-20 07:22:50.068475] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:28.001 [2024-11-20 07:22:50.068483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.068489] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.068494] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b9a690) 00:24:28.001 [2024-11-20 07:22:50.068503] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:28.001 [2024-11-20 07:22:50.068515] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc100, cid 0, qid 0 00:24:28.001 [2024-11-20 07:22:50.068717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.001 [2024-11-20 07:22:50.068726] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.001 [2024-11-20 07:22:50.068731] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.068737] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc100) on tqpair=0x1b9a690 00:24:28.001 [2024-11-20 07:22:50.068745] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.068751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.001 
[2024-11-20 07:22:50.068756] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b9a690) 00:24:28.001 [2024-11-20 07:22:50.068763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.001 [2024-11-20 07:22:50.068769] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.068773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.068780] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1b9a690) 00:24:28.001 [2024-11-20 07:22:50.068786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.001 [2024-11-20 07:22:50.068792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.068796] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.068800] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b9a690) 00:24:28.001 [2024-11-20 07:22:50.068806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.001 [2024-11-20 07:22:50.068813] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.068818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.068822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9a690) 00:24:28.001 [2024-11-20 07:22:50.068829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.001 [2024-11-20 07:22:50.068835] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:28.001 [2024-11-20 07:22:50.068845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:28.001 [2024-11-20 07:22:50.068852] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.068855] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b9a690) 00:24:28.001 [2024-11-20 07:22:50.068862] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.001 [2024-11-20 07:22:50.068876] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc100, cid 0, qid 0 00:24:28.001 [2024-11-20 07:22:50.068882] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc280, cid 1, qid 0 00:24:28.001 [2024-11-20 07:22:50.068887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc400, cid 2, qid 0 00:24:28.001 [2024-11-20 07:22:50.068893] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc580, cid 3, qid 0 00:24:28.001 [2024-11-20 07:22:50.068898] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc700, cid 4, qid 0 00:24:28.001 [2024-11-20 07:22:50.069120] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.001 [2024-11-20 07:22:50.069128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:24:28.001 [2024-11-20 07:22:50.069131] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.069135] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc700) on tqpair=0x1b9a690 00:24:28.001 [2024-11-20 07:22:50.069143] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:28.001 [2024-11-20 07:22:50.069148] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:28.001 [2024-11-20 07:22:50.069156] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:28.001 [2024-11-20 07:22:50.069175] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:28.001 [2024-11-20 07:22:50.069181] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.069185] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.069189] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b9a690) 00:24:28.001 [2024-11-20 07:22:50.069195] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:28.001 [2024-11-20 07:22:50.069209] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc700, cid 4, qid 0 00:24:28.001 [2024-11-20 07:22:50.069419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.001 [2024-11-20 07:22:50.069427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.001 [2024-11-20 07:22:50.069430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.069434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc700) on tqpair=0x1b9a690 00:24:28.001 [2024-11-20 07:22:50.069500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:28.001 [2024-11-20 07:22:50.069511] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:28.001 [2024-11-20 07:22:50.069518] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.001 [2024-11-20 07:22:50.069522] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b9a690) 00:24:28.001 [2024-11-20 07:22:50.069529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.002 [2024-11-20 07:22:50.069539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc700, cid 4, qid 0 00:24:28.002 [2024-11-20 07:22:50.069767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.002 [2024-11-20 07:22:50.069773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.002 [2024-11-20 07:22:50.069777] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.002 [2024-11-20 07:22:50.069781] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b9a690): datao=0, 
datal=4096, cccid=4 00:24:28.002 [2024-11-20 07:22:50.069785] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bfc700) on tqpair(0x1b9a690): expected_datao=0, payload_size=4096 00:24:28.002 [2024-11-20 07:22:50.069791] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.002 [2024-11-20 07:22:50.069808] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.002 [2024-11-20 07:22:50.069812] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.002 [2024-11-20 07:22:50.069964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.002 [2024-11-20 07:22:50.069971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.002 [2024-11-20 07:22:50.069975] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.002 [2024-11-20 07:22:50.069979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc700) on tqpair=0x1b9a690 00:24:28.002 [2024-11-20 07:22:50.069989] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:28.002 [2024-11-20 07:22:50.070003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:28.002 [2024-11-20 07:22:50.070012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:28.002 [2024-11-20 07:22:50.070019] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.002 [2024-11-20 07:22:50.070024] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b9a690) 00:24:28.002 [2024-11-20 07:22:50.070031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.002 [2024-11-20 07:22:50.070042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc700, cid 4, qid 0 00:24:28.002 [2024-11-20 07:22:50.074177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.002 [2024-11-20 07:22:50.074187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.002 [2024-11-20 07:22:50.074191] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.002 [2024-11-20 07:22:50.074195] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b9a690): datao=0, datal=4096, cccid=4 00:24:28.002 [2024-11-20 07:22:50.074204] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bfc700) on tqpair(0x1b9a690): expected_datao=0, payload_size=4096 00:24:28.002 [2024-11-20 07:22:50.074208] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.002 [2024-11-20 07:22:50.074215] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.002 [2024-11-20 07:22:50.074218] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.002 [2024-11-20 07:22:50.074224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.002 [2024-11-20 07:22:50.074230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.002 [2024-11-20 07:22:50.074234] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.002 [2024-11-20 07:22:50.074239] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc700) on tqpair=0x1b9a690 00:24:28.002 [2024-11-20 07:22:50.074253] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:28.002 [2024-11-20 07:22:50.074264] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:28.002 [2024-11-20 07:22:50.074271] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.002 [2024-11-20 07:22:50.074275] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b9a690) 00:24:28.002 [2024-11-20 07:22:50.074282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.002 [2024-11-20 07:22:50.074295] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc700, cid 4, qid 0 00:24:28.002 [2024-11-20 07:22:50.074493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.002 [2024-11-20 07:22:50.074500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.002 [2024-11-20 07:22:50.074503] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.002 [2024-11-20 07:22:50.074507] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b9a690): datao=0, datal=4096, cccid=4 00:24:28.002 [2024-11-20 07:22:50.074512] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bfc700) on tqpair(0x1b9a690): expected_datao=0, payload_size=4096 00:24:28.002 [2024-11-20 07:22:50.074516] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.002 [2024-11-20 07:22:50.074533] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.002 [2024-11-20 07:22:50.074538] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.002 [2024-11-20 07:22:50.074704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.002 [2024-11-20 07:22:50.074710] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.002 [2024-11-20 07:22:50.074714] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.002 [2024-11-20 07:22:50.074718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc700) on tqpair=0x1b9a690 00:24:28.002 [2024-11-20 07:22:50.074726] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:28.002 [2024-11-20 07:22:50.074734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:28.002 [2024-11-20 07:22:50.074744] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:28.002 [2024-11-20 07:22:50.074750] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:28.002 [2024-11-20 07:22:50.074755] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:28.002 [2024-11-20 07:22:50.074761] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:28.002 [2024-11-20 07:22:50.074769] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:28.002 [2024-11-20 07:22:50.074774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:28.002 [2024-11-20 07:22:50.074780] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:28.002 [2024-11-20 07:22:50.074798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.002 [2024-11-20 07:22:50.074802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b9a690) 00:24:28.002 [2024-11-20 07:22:50.074808] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.002 [2024-11-20 07:22:50.074815] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.002 [2024-11-20 07:22:50.074819] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.002 [2024-11-20 07:22:50.074823] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b9a690) 00:24:28.002 [2024-11-20 07:22:50.074829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.002 [2024-11-20 07:22:50.074844] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc700, cid 4, qid 0 00:24:28.002 [2024-11-20 07:22:50.074850] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc880, cid 5, qid 0 00:24:28.002 [2024-11-20 07:22:50.075080] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.002 [2024-11-20 07:22:50.075087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.002 [2024-11-20 07:22:50.075091] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.002 [2024-11-20 07:22:50.075095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc700) on tqpair=0x1b9a690 00:24:28.002 [2024-11-20 07:22:50.075102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.002 [2024-11-20 07:22:50.075108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.002 [2024-11-20 07:22:50.075111] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.002 [2024-11-20 07:22:50.075115] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc880) on tqpair=0x1b9a690 00:24:28.002 [2024-11-20 07:22:50.075124] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.002 [2024-11-20 07:22:50.075129] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b9a690) 00:24:28.002 [2024-11-20 07:22:50.075135] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.003 [2024-11-20 07:22:50.075145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc880, cid 5, qid 0 00:24:28.003 [2024-11-20 07:22:50.075359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.003 [2024-11-20 07:22:50.075367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.003 [2024-11-20 07:22:50.075371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.075375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1bfc880) on tqpair=0x1b9a690 00:24:28.003 [2024-11-20 07:22:50.075384] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.075389] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b9a690) 00:24:28.003 [2024-11-20 07:22:50.075395] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.003 [2024-11-20 07:22:50.075406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc880, cid 5, qid 0 00:24:28.003 [2024-11-20 07:22:50.075582] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.003 [2024-11-20 07:22:50.075589] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.003 [2024-11-20 07:22:50.075592] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.075598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc880) on tqpair=0x1b9a690 00:24:28.003 [2024-11-20 07:22:50.075608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.075613] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b9a690) 00:24:28.003 [2024-11-20 07:22:50.075619] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.003 [2024-11-20 07:22:50.075630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc880, cid 5, qid 0 00:24:28.003 [2024-11-20 07:22:50.075844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.003 [2024-11-20 07:22:50.075851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.003 [2024-11-20 07:22:50.075855] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.075858] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc880) on tqpair=0x1b9a690 00:24:28.003 [2024-11-20 07:22:50.075875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.075879] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b9a690) 00:24:28.003 [2024-11-20 07:22:50.075886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.003 [2024-11-20 07:22:50.075894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.075897] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b9a690) 00:24:28.003 [2024-11-20 07:22:50.075904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.003 [2024-11-20 07:22:50.075911] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.075915] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1b9a690) 00:24:28.003 [2024-11-20 07:22:50.075921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.003 [2024-11-20 07:22:50.075929] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.075933] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b9a690) 00:24:28.003 [2024-11-20 07:22:50.075940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.003 [2024-11-20 07:22:50.075952] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc880, cid 5, qid 0 00:24:28.003 [2024-11-20 07:22:50.075957] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc700, cid 4, qid 0 00:24:28.003 [2024-11-20 07:22:50.075962] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfca00, cid 6, qid 0 00:24:28.003 [2024-11-20 07:22:50.075967] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfcb80, cid 7, qid 0 00:24:28.003 [2024-11-20 07:22:50.076278] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.003 [2024-11-20 07:22:50.076286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.003 [2024-11-20 07:22:50.076289] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.076294] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b9a690): datao=0, datal=8192, cccid=5 00:24:28.003 [2024-11-20 07:22:50.076299] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bfc880) on tqpair(0x1b9a690): expected_datao=0, payload_size=8192 00:24:28.003 [2024-11-20 07:22:50.076303] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.076374] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.076379] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.076387] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.003 [2024-11-20 07:22:50.076394] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.003 [2024-11-20 07:22:50.076397] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.076401] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b9a690): datao=0, datal=512, cccid=4 00:24:28.003 [2024-11-20 07:22:50.076405] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bfc700) on tqpair(0x1b9a690): expected_datao=0, payload_size=512 00:24:28.003 [2024-11-20 07:22:50.076411] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.076418] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.076421] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.076427] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.003 [2024-11-20 07:22:50.076433] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.003 [2024-11-20 07:22:50.076437] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.076440] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b9a690): datao=0, datal=512, cccid=6 00:24:28.003 [2024-11-20 07:22:50.076445] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bfca00) on tqpair(0x1b9a690): expected_datao=0, payload_size=512 00:24:28.003 [2024-11-20 
07:22:50.076449] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.076455] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.076459] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.076465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.003 [2024-11-20 07:22:50.076472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.003 [2024-11-20 07:22:50.076477] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.076481] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b9a690): datao=0, datal=4096, cccid=7 00:24:28.003 [2024-11-20 07:22:50.076486] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bfcb80) on tqpair(0x1b9a690): expected_datao=0, payload_size=4096 00:24:28.003 [2024-11-20 07:22:50.076490] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.076502] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.076506] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.076520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.003 [2024-11-20 07:22:50.076526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.003 [2024-11-20 07:22:50.076529] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.076533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc880) on tqpair=0x1b9a690 00:24:28.003 [2024-11-20 07:22:50.076548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.003 [2024-11-20 07:22:50.076555] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.003 [2024-11-20 07:22:50.076558] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.076562] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc700) on tqpair=0x1b9a690 00:24:28.003 [2024-11-20 07:22:50.076573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.003 [2024-11-20 07:22:50.076579] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.003 [2024-11-20 07:22:50.076582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.076586] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfca00) on tqpair=0x1b9a690 00:24:28.003 [2024-11-20 07:22:50.076593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.003 [2024-11-20 07:22:50.076599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.003 [2024-11-20 07:22:50.076602] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.003 [2024-11-20 07:22:50.076608] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfcb80) on tqpair=0x1b9a690 00:24:28.003 ===================================================== 00:24:28.003 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:28.003 ===================================================== 00:24:28.003 Controller Capabilities/Features 00:24:28.003 ================================ 00:24:28.003 Vendor ID: 8086 00:24:28.003 Subsystem Vendor ID: 8086 00:24:28.003 Serial Number: SPDK00000000000001 00:24:28.003 Model Number: SPDK 
bdev Controller 00:24:28.003 Firmware Version: 25.01 00:24:28.003 Recommended Arb Burst: 6 00:24:28.003 IEEE OUI Identifier: e4 d2 5c 00:24:28.003 Multi-path I/O 00:24:28.003 May have multiple subsystem ports: Yes 00:24:28.003 May have multiple controllers: Yes 00:24:28.003 Associated with SR-IOV VF: No 00:24:28.003 Max Data Transfer Size: 131072 00:24:28.003 Max Number of Namespaces: 32 00:24:28.003 Max Number of I/O Queues: 127 00:24:28.003 NVMe Specification Version (VS): 1.3 00:24:28.003 NVMe Specification Version (Identify): 1.3 00:24:28.003 Maximum Queue Entries: 128 00:24:28.003 Contiguous Queues Required: Yes 00:24:28.003 Arbitration Mechanisms Supported 00:24:28.003 Weighted Round Robin: Not Supported 00:24:28.003 Vendor Specific: Not Supported 00:24:28.003 Reset Timeout: 15000 ms 00:24:28.003 Doorbell Stride: 4 bytes 00:24:28.003 NVM Subsystem Reset: Not Supported 00:24:28.003 Command Sets Supported 00:24:28.003 NVM Command Set: Supported 00:24:28.004 Boot Partition: Not Supported 00:24:28.004 Memory Page Size Minimum: 4096 bytes 00:24:28.004 Memory Page Size Maximum: 4096 bytes 00:24:28.004 Persistent Memory Region: Not Supported 00:24:28.004 Optional Asynchronous Events Supported 00:24:28.004 Namespace Attribute Notices: Supported 00:24:28.004 Firmware Activation Notices: Not Supported 00:24:28.004 ANA Change Notices: Not Supported 00:24:28.004 PLE Aggregate Log Change Notices: Not Supported 00:24:28.004 LBA Status Info Alert Notices: Not Supported 00:24:28.004 EGE Aggregate Log Change Notices: Not Supported 00:24:28.004 Normal NVM Subsystem Shutdown event: Not Supported 00:24:28.004 Zone Descriptor Change Notices: Not Supported 00:24:28.004 Discovery Log Change Notices: Not Supported 00:24:28.004 Controller Attributes 00:24:28.004 128-bit Host Identifier: Supported 00:24:28.004 Non-Operational Permissive Mode: Not Supported 00:24:28.004 NVM Sets: Not Supported 00:24:28.004 Read Recovery Levels: Not Supported 00:24:28.004 Endurance Groups: Not Supported 00:24:28.004 Predictable Latency Mode: Not Supported 00:24:28.004 Traffic Based Keep ALive: Not Supported 00:24:28.004 Namespace Granularity: Not Supported 00:24:28.004 SQ Associations: Not Supported 00:24:28.004 UUID List: Not Supported 00:24:28.004 Multi-Domain Subsystem: Not Supported 00:24:28.004 Fixed Capacity Management: Not Supported 00:24:28.004 Variable Capacity Management: Not Supported 00:24:28.004 Delete Endurance Group: Not Supported 00:24:28.004 Delete NVM Set: Not Supported 00:24:28.004 Extended LBA Formats Supported: Not Supported 00:24:28.004 Flexible Data Placement Supported: Not Supported 00:24:28.004 00:24:28.004 Controller Memory Buffer Support 00:24:28.004 ================================ 00:24:28.004 Supported: No 00:24:28.004 00:24:28.004 Persistent Memory Region Support 00:24:28.004 ================================ 00:24:28.004 Supported: No 00:24:28.004 00:24:28.004 Admin Command Set Attributes 00:24:28.004 ============================ 00:24:28.004 Security Send/Receive: Not Supported 00:24:28.004 Format NVM: Not Supported 00:24:28.004 Firmware Activate/Download: Not Supported 00:24:28.004 Namespace Management: Not Supported 00:24:28.004 Device Self-Test: Not Supported 00:24:28.004 Directives: Not Supported 00:24:28.004 NVMe-MI: Not Supported 00:24:28.004 Virtualization Management: Not Supported 00:24:28.004 Doorbell Buffer Config: Not Supported 00:24:28.004 Get LBA Status Capability: Not Supported 00:24:28.004 Command & Feature Lockdown Capability: Not Supported 00:24:28.004 Abort Command Limit: 4 
00:24:28.004 Async Event Request Limit: 4 00:24:28.004 Number of Firmware Slots: N/A 00:24:28.004 Firmware Slot 1 Read-Only: N/A 00:24:28.004 Firmware Activation Without Reset: N/A 00:24:28.004 Multiple Update Detection Support: N/A 00:24:28.004 Firmware Update Granularity: No Information Provided 00:24:28.004 Per-Namespace SMART Log: No 00:24:28.004 Asymmetric Namespace Access Log Page: Not Supported 00:24:28.004 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:28.004 Command Effects Log Page: Supported 00:24:28.004 Get Log Page Extended Data: Supported 00:24:28.004 Telemetry Log Pages: Not Supported 00:24:28.004 Persistent Event Log Pages: Not Supported 00:24:28.004 Supported Log Pages Log Page: May Support 00:24:28.004 Commands Supported & Effects Log Page: Not Supported 00:24:28.004 Feature Identifiers & Effects Log Page:May Support 00:24:28.004 NVMe-MI Commands & Effects Log Page: May Support 00:24:28.004 Data Area 4 for Telemetry Log: Not Supported 00:24:28.004 Error Log Page Entries Supported: 128 00:24:28.004 Keep Alive: Supported 00:24:28.004 Keep Alive Granularity: 10000 ms 00:24:28.004 00:24:28.004 NVM Command Set Attributes 00:24:28.004 ========================== 00:24:28.004 Submission Queue Entry Size 00:24:28.004 Max: 64 00:24:28.004 Min: 64 00:24:28.004 Completion Queue Entry Size 00:24:28.004 Max: 16 00:24:28.004 Min: 16 00:24:28.004 Number of Namespaces: 32 00:24:28.004 Compare Command: Supported 00:24:28.004 Write Uncorrectable Command: Not Supported 00:24:28.004 Dataset Management Command: Supported 00:24:28.004 Write Zeroes Command: Supported 00:24:28.004 Set Features Save Field: Not Supported 00:24:28.004 Reservations: Supported 00:24:28.004 Timestamp: Not Supported 00:24:28.004 Copy: Supported 00:24:28.004 Volatile Write Cache: Present 00:24:28.004 Atomic Write Unit (Normal): 1 00:24:28.004 Atomic Write Unit (PFail): 1 00:24:28.004 Atomic Compare & Write Unit: 1 00:24:28.004 Fused Compare & Write: Supported 00:24:28.004 Scatter-Gather List 00:24:28.004 SGL Command Set: Supported 00:24:28.004 SGL Keyed: Supported 00:24:28.004 SGL Bit Bucket Descriptor: Not Supported 00:24:28.004 SGL Metadata Pointer: Not Supported 00:24:28.004 Oversized SGL: Not Supported 00:24:28.004 SGL Metadata Address: Not Supported 00:24:28.004 SGL Offset: Supported 00:24:28.004 Transport SGL Data Block: Not Supported 00:24:28.004 Replay Protected Memory Block: Not Supported 00:24:28.004 00:24:28.004 Firmware Slot Information 00:24:28.004 ========================= 00:24:28.004 Active slot: 1 00:24:28.004 Slot 1 Firmware Revision: 25.01 00:24:28.004 00:24:28.004 00:24:28.004 Commands Supported and Effects 00:24:28.004 ============================== 00:24:28.004 Admin Commands 00:24:28.004 -------------- 00:24:28.004 Get Log Page (02h): Supported 00:24:28.004 Identify (06h): Supported 00:24:28.004 Abort (08h): Supported 00:24:28.004 Set Features (09h): Supported 00:24:28.004 Get Features (0Ah): Supported 00:24:28.004 Asynchronous Event Request (0Ch): Supported 00:24:28.004 Keep Alive (18h): Supported 00:24:28.004 I/O Commands 00:24:28.004 ------------ 00:24:28.004 Flush (00h): Supported LBA-Change 00:24:28.004 Write (01h): Supported LBA-Change 00:24:28.004 Read (02h): Supported 00:24:28.004 Compare (05h): Supported 00:24:28.004 Write Zeroes (08h): Supported LBA-Change 00:24:28.004 Dataset Management (09h): Supported LBA-Change 00:24:28.004 Copy (19h): Supported LBA-Change 00:24:28.004 00:24:28.004 Error Log 00:24:28.004 ========= 00:24:28.004 00:24:28.004 Arbitration 00:24:28.004 =========== 
00:24:28.004 Arbitration Burst: 1 00:24:28.004 00:24:28.004 Power Management 00:24:28.004 ================ 00:24:28.004 Number of Power States: 1 00:24:28.004 Current Power State: Power State #0 00:24:28.004 Power State #0: 00:24:28.004 Max Power: 0.00 W 00:24:28.004 Non-Operational State: Operational 00:24:28.004 Entry Latency: Not Reported 00:24:28.004 Exit Latency: Not Reported 00:24:28.004 Relative Read Throughput: 0 00:24:28.004 Relative Read Latency: 0 00:24:28.004 Relative Write Throughput: 0 00:24:28.005 Relative Write Latency: 0 00:24:28.005 Idle Power: Not Reported 00:24:28.005 Active Power: Not Reported 00:24:28.005 Non-Operational Permissive Mode: Not Supported 00:24:28.005 00:24:28.005 Health Information 00:24:28.005 ================== 00:24:28.005 Critical Warnings: 00:24:28.005 Available Spare Space: OK 00:24:28.005 Temperature: OK 00:24:28.005 Device Reliability: OK 00:24:28.005 Read Only: No 00:24:28.005 Volatile Memory Backup: OK 00:24:28.005 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:28.005 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:28.005 Available Spare: 0% 00:24:28.005 Available Spare Threshold: 0% 00:24:28.005 Life Percentage Used: 0% [2024-11-20 07:22:50.076711] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.005 [2024-11-20 07:22:50.076717] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b9a690) 00:24:28.005 [2024-11-20 07:22:50.076723] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.005 [2024-11-20 07:22:50.076736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfcb80, cid 7, qid 0 00:24:28.005 [2024-11-20 07:22:50.076921] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.005 [2024-11-20 07:22:50.076930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.005 [2024-11-20 07:22:50.076934] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.005 [2024-11-20 07:22:50.076937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfcb80) on tqpair=0x1b9a690 00:24:28.005 [2024-11-20 07:22:50.076972] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:28.005 [2024-11-20 07:22:50.076982] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc100) on tqpair=0x1b9a690 00:24:28.005 [2024-11-20 07:22:50.076989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.005 [2024-11-20 07:22:50.076996] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc280) on tqpair=0x1b9a690 00:24:28.005 [2024-11-20 07:22:50.077000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.005 [2024-11-20 07:22:50.077005] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc400) on tqpair=0x1b9a690 00:24:28.005 [2024-11-20 07:22:50.077010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.005 [2024-11-20 07:22:50.077015] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc580) on tqpair=0x1b9a690 00:24:28.005 [2024-11-20 07:22:50.077020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08)
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.005 [2024-11-20 07:22:50.077028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.005 [2024-11-20 07:22:50.077032] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.005 [2024-11-20 07:22:50.077036] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9a690) 00:24:28.005 [2024-11-20 07:22:50.077043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.005 [2024-11-20 07:22:50.077055] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc580, cid 3, qid 0 00:24:28.005 [2024-11-20 07:22:50.077263] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.005 [2024-11-20 07:22:50.077270] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.005 [2024-11-20 07:22:50.077274] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.005 [2024-11-20 07:22:50.077278] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc580) on tqpair=0x1b9a690 00:24:28.005 [2024-11-20 07:22:50.077286] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.005 [2024-11-20 07:22:50.077289] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.005 [2024-11-20 07:22:50.077293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9a690) 00:24:28.005 [2024-11-20 07:22:50.077300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.005 [2024-11-20 07:22:50.077315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc580, cid 3, qid 0 00:24:28.005 [2024-11-20 07:22:50.077508] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.005 [2024-11-20 07:22:50.077517] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.005 [2024-11-20 07:22:50.077525] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.005 [2024-11-20 07:22:50.077530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc580) on tqpair=0x1b9a690 00:24:28.005 [2024-11-20 07:22:50.077536] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:28.005 [2024-11-20 07:22:50.077542] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:28.005 [2024-11-20 07:22:50.077551] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.005 [2024-11-20 07:22:50.077555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.005 [2024-11-20 07:22:50.077558] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9a690) 00:24:28.005 [2024-11-20 07:22:50.077565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.005 [2024-11-20 07:22:50.077576] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc580, cid 3, qid 0 00:24:28.005 [2024-11-20 07:22:50.077828] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.005 [2024-11-20 07:22:50.077836] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.005 [2024-11-20 07:22:50.077839] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:24:28.005 [2024-11-20 07:22:50.077844] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc580) on tqpair=0x1b9a690 00:24:28.005 [2024-11-20 07:22:50.077855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.005 [2024-11-20 07:22:50.077859] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.005 [2024-11-20 07:22:50.077862] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9a690) 00:24:28.005 [2024-11-20 07:22:50.077869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.005 [2024-11-20 07:22:50.077881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc580, cid 3, qid 0 00:24:28.005 [2024-11-20 07:22:50.078080] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.005 [2024-11-20 07:22:50.078087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.005 [2024-11-20 07:22:50.078091] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.005 [2024-11-20 07:22:50.078095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc580) on tqpair=0x1b9a690 00:24:28.005 [2024-11-20 07:22:50.078105] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.005 [2024-11-20 07:22:50.078109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.005 [2024-11-20 07:22:50.078112] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b9a690) 00:24:28.005 [2024-11-20 07:22:50.078119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.005 [2024-11-20 07:22:50.078129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bfc580, cid 3, qid 0 00:24:28.005 [2024-11-20 07:22:50.082173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.005 [2024-11-20 07:22:50.082185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.005 [2024-11-20 07:22:50.082188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.005 [2024-11-20 07:22:50.082192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bfc580) on tqpair=0x1b9a690 00:24:28.006 [2024-11-20 07:22:50.082201] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:24:28.006 Data Units Read: 0 00:24:28.006 Data Units Written: 0 00:24:28.006 Host Read Commands: 0 00:24:28.006 Host Write Commands: 0 00:24:28.006 Controller Busy Time: 0 minutes 00:24:28.006 Power Cycles: 0 00:24:28.006 Power On Hours: 0 hours 00:24:28.006 Unsafe Shutdowns: 0 00:24:28.006 Unrecoverable Media Errors: 0 00:24:28.006 Lifetime Error Log Entries: 0 00:24:28.006 Warning Temperature Time: 0 minutes 00:24:28.006 Critical Temperature Time: 0 minutes 00:24:28.006 00:24:28.006 Number of Queues 00:24:28.006 ================ 00:24:28.006 Number of I/O Submission Queues: 127 00:24:28.006 Number of I/O Completion Queues: 127 00:24:28.006 00:24:28.006 Active Namespaces 00:24:28.006 ================= 00:24:28.006 Namespace ID:1 00:24:28.006 Error Recovery Timeout: Unlimited 00:24:28.006 Command Set Identifier: NVM (00h) 00:24:28.006 Deallocate: Supported 00:24:28.006 Deallocated/Unwritten Error: Not Supported 00:24:28.006 Deallocated Read Value: Unknown 00:24:28.006
Deallocate in Write Zeroes: Not Supported 00:24:28.006 Deallocated Guard Field: 0xFFFF 00:24:28.006 Flush: Supported 00:24:28.006 Reservation: Supported 00:24:28.006 Namespace Sharing Capabilities: Multiple Controllers 00:24:28.006 Size (in LBAs): 131072 (0GiB) 00:24:28.006 Capacity (in LBAs): 131072 (0GiB) 00:24:28.006 Utilization (in LBAs): 131072 (0GiB) 00:24:28.006 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:28.006 EUI64: ABCDEF0123456789 00:24:28.006 UUID: ff3802a3-8412-47ad-a6fd-ed7968baa12f 00:24:28.006 Thin Provisioning: Not Supported 00:24:28.006 Per-NS Atomic Units: Yes 00:24:28.006 Atomic Boundary Size (Normal): 0 00:24:28.006 Atomic Boundary Size (PFail): 0 00:24:28.006 Atomic Boundary Offset: 0 00:24:28.006 Maximum Single Source Range Length: 65535 00:24:28.006 Maximum Copy Length: 65535 00:24:28.006 Maximum Source Range Count: 1 00:24:28.006 NGUID/EUI64 Never Reused: No 00:24:28.006 Namespace Write Protected: No 00:24:28.006 Number of LBA Formats: 1 00:24:28.006 Current LBA Format: LBA Format #00 00:24:28.006 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:28.006 00:24:28.006 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:28.006 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:28.006 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.006 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:28.006 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.006 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:28.006 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:28.006 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:28.006 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:28.006 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:28.006 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:28.006 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:28.006 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:28.006 rmmod nvme_tcp 00:24:28.006 rmmod nvme_fabrics 00:24:28.006 rmmod nvme_keyring 00:24:28.006 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:28.006 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:28.006 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:28.006 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3613714 ']' 00:24:28.006 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3613714 00:24:28.006 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 3613714 ']' 00:24:28.006 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 3613714 00:24:28.006 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:24:28.006 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:28.006 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers 
-o comm= 3613714 00:24:28.267 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:28.267 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:28.267 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3613714' 00:24:28.267 killing process with pid 3613714 00:24:28.267 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 3613714 00:24:28.267 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 3613714 00:24:28.267 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:28.267 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:28.267 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:28.267 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:28.267 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:28.267 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:28.267 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:28.267 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.267 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:28.267 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.267 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.267 07:22:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:30.810 00:24:30.810 real 0m11.662s 00:24:30.810 user 0m8.678s 00:24:30.810 sys 0m6.105s 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:30.810 ************************************ 00:24:30.810 END TEST nvmf_identify 00:24:30.810 ************************************ 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.810 ************************************ 00:24:30.810 START TEST nvmf_perf 00:24:30.810 ************************************ 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:30.810 * Looking for test storage... 
00:24:30.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:30.810 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:30.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.811 --rc genhtml_branch_coverage=1 00:24:30.811 --rc genhtml_function_coverage=1 00:24:30.811 --rc genhtml_legend=1 00:24:30.811 --rc geninfo_all_blocks=1 00:24:30.811 --rc geninfo_unexecuted_blocks=1 00:24:30.811 00:24:30.811 ' 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:30.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.811 --rc genhtml_branch_coverage=1 00:24:30.811 --rc genhtml_function_coverage=1 00:24:30.811 --rc genhtml_legend=1 00:24:30.811 --rc geninfo_all_blocks=1 00:24:30.811 --rc geninfo_unexecuted_blocks=1 00:24:30.811 00:24:30.811 ' 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:30.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.811 --rc genhtml_branch_coverage=1 00:24:30.811 --rc genhtml_function_coverage=1 00:24:30.811 --rc genhtml_legend=1 00:24:30.811 --rc geninfo_all_blocks=1 00:24:30.811 --rc geninfo_unexecuted_blocks=1 00:24:30.811 00:24:30.811 ' 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:30.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.811 --rc genhtml_branch_coverage=1 00:24:30.811 --rc genhtml_function_coverage=1 00:24:30.811 --rc genhtml_legend=1 00:24:30.811 --rc geninfo_all_blocks=1 00:24:30.811 --rc geninfo_unexecuted_blocks=1 00:24:30.811 00:24:30.811 ' 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:30.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.811 07:22:52 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:30.811 07:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:38.952 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:38.952 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:38.952 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.952 07:23:00 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:38.952 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.952 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.953 07:23:00 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:38.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:24:38.953 00:24:38.953 --- 10.0.0.2 ping statistics --- 00:24:38.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.953 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:38.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:24:38.953 00:24:38.953 --- 10.0.0.1 ping statistics --- 00:24:38.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.953 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3618095 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3618095 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 3618095 ']' 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:38.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:38.953 07:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:38.953 [2024-11-20 07:23:00.481467] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:24:38.953 [2024-11-20 07:23:00.481533] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.953 [2024-11-20 07:23:00.583053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:38.953 [2024-11-20 07:23:00.636123] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.953 [2024-11-20 07:23:00.636184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.953 [2024-11-20 07:23:00.636193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.953 [2024-11-20 07:23:00.636200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.953 [2024-11-20 07:23:00.636206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.953 [2024-11-20 07:23:00.638274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.953 [2024-11-20 07:23:00.638489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:38.953 [2024-11-20 07:23:00.638490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.953 [2024-11-20 07:23:00.638332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:39.214 07:23:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:39.214 07:23:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:24:39.214 07:23:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:39.214 07:23:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:39.214 07:23:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:39.214 07:23:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.214 07:23:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:39.214 07:23:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:39.787 07:23:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:39.787 07:23:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:40.048 07:23:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:40.048 07:23:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:40.048 07:23:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
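[Editor's note] At this point perf.sh has created a 64 MiB Malloc bdev (512-byte blocks) and captured the local NVMe traddr 0000:65:00.0; the trace that follows shows it appending Nvme0n1 to the bdev list and provisioning the target over JSON-RPC. Condensed, the provisioning sequence it drives is the one sketched below — the commands are taken verbatim from the trace, and the sketch assumes a running nvmf_tgt reachable through this rpc.py:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                                    # -> Malloc0
    $rpc nvmf_create_transport -t tcp -o                              # TCP transport, flags as used in this run
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # becomes NSID 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1     # becomes NSID 2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf runs that follow then target either the local PCIe controller (trtype:PCIe traddr:0000:65:00.0) or the TCP listener (trtype:tcp traddr:10.0.0.2 trsvcid:4420) while varying queue depth and IO size.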
00:24:40.048 07:23:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:40.048 07:23:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:40.048 07:23:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:40.048 07:23:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:40.309 [2024-11-20 07:23:02.464227] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.309 07:23:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:40.570 07:23:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:40.570 07:23:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:40.830 07:23:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:40.830 07:23:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:40.830 07:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:41.091 [2024-11-20 07:23:03.239554] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.091 07:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:41.351 07:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:41.351 07:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:41.351 07:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:41.351 07:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:42.732 Initializing NVMe Controllers 00:24:42.732 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:42.732 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:42.732 Initialization complete. Launching workers. 
00:24:42.732 ======================================================== 00:24:42.732 Latency(us) 00:24:42.732 Device Information : IOPS MiB/s Average min max 00:24:42.732 PCIE (0000:65:00.0) NSID 1 from core 0: 78753.27 307.63 405.85 13.29 4868.32 00:24:42.732 ======================================================== 00:24:42.732 Total : 78753.27 307.63 405.85 13.29 4868.32 00:24:42.732 00:24:42.732 07:23:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:43.671 Initializing NVMe Controllers 00:24:43.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:43.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:43.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:43.671 Initialization complete. Launching workers. 00:24:43.671 ======================================================== 00:24:43.671 Latency(us) 00:24:43.671 Device Information : IOPS MiB/s Average min max 00:24:43.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 89.76 0.35 11406.12 208.15 46031.41 00:24:43.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 40.89 0.16 24648.50 7963.38 50881.44 00:24:43.671 ======================================================== 00:24:43.671 Total : 130.66 0.51 15550.68 208.15 50881.44 00:24:43.671 00:24:43.932 07:23:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:45.314 Initializing NVMe Controllers 00:24:45.314 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:45.314 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:45.314 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:45.314 Initialization complete. Launching workers. 00:24:45.314 ======================================================== 00:24:45.314 Latency(us) 00:24:45.314 Device Information : IOPS MiB/s Average min max 00:24:45.314 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11762.00 45.95 2720.82 401.02 6159.02 00:24:45.314 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3854.00 15.05 8348.29 7220.49 16077.54 00:24:45.314 ======================================================== 00:24:45.314 Total : 15616.00 61.00 4109.67 401.02 16077.54 00:24:45.314 00:24:45.314 07:23:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:45.314 07:23:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:45.314 07:23:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:47.852 Initializing NVMe Controllers 00:24:47.852 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:47.852 Controller IO queue size 128, less than required. 00:24:47.852 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:24:47.852 Controller IO queue size 128, less than required. 00:24:47.852 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:47.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:47.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:47.852 Initialization complete. Launching workers. 00:24:47.852 ======================================================== 00:24:47.852 Latency(us) 00:24:47.852 Device Information : IOPS MiB/s Average min max 00:24:47.852 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2333.89 583.47 55504.66 34104.92 91521.77 00:24:47.852 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 584.60 146.15 222491.85 49201.26 354394.55 00:24:47.852 ======================================================== 00:24:47.852 Total : 2918.48 729.62 88953.56 34104.92 354394.55 00:24:47.852 00:24:47.852 07:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:47.852 No valid NVMe controllers or AIO or URING devices found 00:24:47.852 Initializing NVMe Controllers 00:24:47.852 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:47.852 Controller IO queue size 128, less than required. 00:24:47.852 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:47.852 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:47.852 Controller IO queue size 128, less than required. 00:24:47.852 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:47.852 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:47.852 WARNING: Some requested NVMe devices were skipped 00:24:47.852 07:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:50.397 Initializing NVMe Controllers 00:24:50.397 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:50.397 Controller IO queue size 128, less than required. 00:24:50.397 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:50.397 Controller IO queue size 128, less than required. 00:24:50.397 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:50.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:50.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:50.397 Initialization complete. Launching workers. 
00:24:50.397 00:24:50.397 ==================== 00:24:50.397 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:50.397 TCP transport: 00:24:50.397 polls: 38148 00:24:50.397 idle_polls: 24469 00:24:50.397 sock_completions: 13679 00:24:50.397 nvme_completions: 7335 00:24:50.397 submitted_requests: 10980 00:24:50.397 queued_requests: 1 00:24:50.397 00:24:50.397 ==================== 00:24:50.397 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:50.397 TCP transport: 00:24:50.397 polls: 34458 00:24:50.397 idle_polls: 19109 00:24:50.397 sock_completions: 15349 00:24:50.397 nvme_completions: 7773 00:24:50.397 submitted_requests: 11700 00:24:50.397 queued_requests: 1 00:24:50.397 ======================================================== 00:24:50.397 Latency(us) 00:24:50.397 Device Information : IOPS MiB/s Average min max 00:24:50.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1830.38 457.59 71241.91 35415.87 125186.60 00:24:50.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1939.69 484.92 66131.93 32597.51 125076.95 00:24:50.397 ======================================================== 00:24:50.397 Total : 3770.07 942.52 68612.84 32597.51 125186.60 00:24:50.397 00:24:50.397 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:50.397 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:50.397 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:50.397 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:50.397 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:50.397 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:50.397 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:24:50.397 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:50.397 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:50.398 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:50.398 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:50.398 rmmod nvme_tcp 00:24:50.398 rmmod nvme_fabrics 00:24:50.398 rmmod nvme_keyring 00:24:50.659 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:50.659 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:50.659 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:50.659 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3618095 ']' 00:24:50.659 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3618095 00:24:50.659 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 3618095 ']' 00:24:50.659 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 3618095 00:24:50.659 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:24:50.659 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:50.659 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3618095 00:24:50.659 07:23:12 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:50.659 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:50.659 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3618095' 00:24:50.659 killing process with pid 3618095 00:24:50.659 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 3618095 00:24:50.659 07:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 3618095 00:24:52.572 07:23:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:52.572 07:23:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:52.572 07:23:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:52.572 07:23:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:52.572 07:23:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:52.572 07:23:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:52.572 07:23:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:52.572 07:23:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:52.572 07:23:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:52.572 07:23:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.572 07:23:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:52.572 07:23:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.187 07:23:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:55.187 00:24:55.187 real 0m24.165s 00:24:55.187 user 0m57.558s 00:24:55.187 sys 0m8.810s 00:24:55.187 07:23:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:55.187 07:23:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:55.187 ************************************ 00:24:55.187 END TEST nvmf_perf 00:24:55.187 ************************************ 00:24:55.187 07:23:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:55.187 07:23:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:55.187 07:23:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:55.187 07:23:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.187 ************************************ 00:24:55.187 START TEST nvmf_fio_host 00:24:55.187 ************************************ 00:24:55.187 07:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:55.187 * Looking for test storage... 
00:24:55.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:55.187 07:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:55.187 07:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:55.187 07:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:55.187 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:55.187 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:55.187 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:55.187 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:55.187 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:55.187 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:55.187 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:55.187 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:55.187 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:55.187 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:55.187 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:55.187 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:55.187 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:55.187 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:55.187 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:55.187 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:55.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.188 --rc genhtml_branch_coverage=1 00:24:55.188 --rc genhtml_function_coverage=1 00:24:55.188 --rc genhtml_legend=1 00:24:55.188 --rc geninfo_all_blocks=1 00:24:55.188 --rc geninfo_unexecuted_blocks=1 00:24:55.188 00:24:55.188 ' 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:55.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.188 --rc genhtml_branch_coverage=1 00:24:55.188 --rc genhtml_function_coverage=1 00:24:55.188 --rc genhtml_legend=1 00:24:55.188 --rc geninfo_all_blocks=1 00:24:55.188 --rc geninfo_unexecuted_blocks=1 00:24:55.188 00:24:55.188 ' 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:55.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.188 --rc genhtml_branch_coverage=1 00:24:55.188 --rc genhtml_function_coverage=1 00:24:55.188 --rc genhtml_legend=1 00:24:55.188 --rc geninfo_all_blocks=1 00:24:55.188 --rc geninfo_unexecuted_blocks=1 00:24:55.188 00:24:55.188 ' 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:55.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.188 --rc genhtml_branch_coverage=1 00:24:55.188 --rc genhtml_function_coverage=1 00:24:55.188 --rc genhtml_legend=1 00:24:55.188 --rc geninfo_all_blocks=1 00:24:55.188 --rc geninfo_unexecuted_blocks=1 00:24:55.188 00:24:55.188 ' 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.188 07:23:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.188 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:55.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:55.189 
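[editor's note] The nvmftestinit call that follows builds the physical-NIC TCP topology this run exercises. A minimal sketch of the equivalent manual steps, using the interface and namespace names that appear later in this log (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk); the real helper in test/nvmf/common.sh also discovers the devices and registers cleanup:

    # Sketch only: the target-side port moves into its own network
    # namespace so initiator->target traffic crosses the physical link.
    ip netns add cvl_0_0_ns_spdk                       # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port; the comment tag lets cleanup find the rule.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator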
07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:55.189 07:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:03.441 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:03.441 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:03.441 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:03.441 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:03.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:03.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:25:03.441 00:25:03.441 --- 10.0.0.2 ping statistics --- 00:25:03.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.441 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:03.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:03.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:25:03.441 00:25:03.441 --- 10.0.0.1 ping statistics --- 00:25:03.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.441 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:03.441 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:03.442 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:03.442 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:03.442 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:03.442 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:03.442 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:03.442 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:03.442 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:03.442 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.442 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3625135 00:25:03.442 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:03.442 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:03.442 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3625135 00:25:03.442 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 3625135 ']' 00:25:03.442 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.442 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:03.442 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.442 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:03.442 07:23:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.442 [2024-11-20 07:23:24.724061] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
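[editor's note] The startup banner lines around this point come from nvmf_tgt itself; waitforlisten, traced just above, blocks until the freshly started target answers on /var/tmp/spdk.sock. A rough standalone equivalent of that polling loop (hypothetical helper name; the real implementation lives in autotest_common.sh):

    # Hypothetical wait_for_rpc, sketching what waitforlisten does: the
    # pid must stay alive and the RPC server must answer a trivial call.
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1     # target died early
            "$rpc" -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                       # timed out
    }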
00:25:03.442 [2024-11-20 07:23:24.724127] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.442 [2024-11-20 07:23:24.822694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:03.442 [2024-11-20 07:23:24.875681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.442 [2024-11-20 07:23:24.875732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:03.442 [2024-11-20 07:23:24.875746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:03.442 [2024-11-20 07:23:24.875754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:03.442 [2024-11-20 07:23:24.875760] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:03.442 [2024-11-20 07:23:24.878129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.442 [2024-11-20 07:23:24.878269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.442 [2024-11-20 07:23:24.878606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:03.442 [2024-11-20 07:23:24.878611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.442 07:23:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:03.442 07:23:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:25:03.442 07:23:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:03.703 [2024-11-20 07:23:25.723568] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.703 07:23:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:03.703 07:23:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:03.703 07:23:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.703 07:23:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:03.964 Malloc1 00:25:03.964 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:04.225 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:04.225 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:04.486 [2024-11-20 07:23:26.604650] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:04.486 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:04.747 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:04.747 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:04.747 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:04.747 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:04.747 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:04.747 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:25:04.747 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:04.747 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:25:04.747 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:04.747 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:04.747 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:04.747 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:25:04.747 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:04.747 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:04.747 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:04.747 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:04.747 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:04.747 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:25:04.747 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:04.747 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:04.747 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:04.747 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:04.747 07:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:05.008 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:05.008 fio-3.35 00:25:05.008 Starting 1 thread 00:25:07.573 00:25:07.573 test: (groupid=0, jobs=1): 
err= 0: pid=3625890: Wed Nov 20 07:23:29 2024 00:25:07.573 read: IOPS=13.8k, BW=53.9MiB/s (56.5MB/s)(108MiB/2005msec) 00:25:07.573 slat (usec): min=2, max=284, avg= 2.16, stdev= 2.37 00:25:07.573 clat (usec): min=3324, max=9324, avg=5108.43, stdev=390.37 00:25:07.573 lat (usec): min=3327, max=9326, avg=5110.59, stdev=390.54 00:25:07.573 clat percentiles (usec): 00:25:07.573 | 1.00th=[ 4228], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:25:07.573 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5211], 00:25:07.573 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5669], 00:25:07.573 | 99.00th=[ 5932], 99.50th=[ 6456], 99.90th=[ 8586], 99.95th=[ 8717], 00:25:07.573 | 99.99th=[ 9241] 00:25:07.573 bw ( KiB/s): min=54176, max=55704, per=100.00%, avg=55240.00, stdev=716.23, samples=4 00:25:07.573 iops : min=13544, max=13926, avg=13810.00, stdev=179.06, samples=4 00:25:07.573 write: IOPS=13.8k, BW=53.9MiB/s (56.5MB/s)(108MiB/2005msec); 0 zone resets 00:25:07.573 slat (usec): min=2, max=264, avg= 2.22, stdev= 1.78 00:25:07.573 clat (usec): min=2743, max=8256, avg=4125.75, stdev=337.81 00:25:07.573 lat (usec): min=2746, max=8258, avg=4127.97, stdev=338.09 00:25:07.573 clat percentiles (usec): 00:25:07.573 | 1.00th=[ 3425], 5.00th=[ 3654], 10.00th=[ 3752], 20.00th=[ 3884], 00:25:07.573 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:25:07.573 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4555], 00:25:07.573 | 99.00th=[ 5014], 99.50th=[ 5604], 99.90th=[ 7177], 99.95th=[ 7373], 00:25:07.573 | 99.99th=[ 8094] 00:25:07.573 bw ( KiB/s): min=54560, max=55616, per=100.00%, avg=55198.00, stdev=463.74, samples=4 00:25:07.573 iops : min=13640, max=13904, avg=13799.50, stdev=115.94, samples=4 00:25:07.573 lat (msec) : 4=16.52%, 10=83.48% 00:25:07.573 cpu : usr=74.90%, sys=23.85%, ctx=26, majf=0, minf=17 00:25:07.573 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:07.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:07.573 issued rwts: total=27680,27666,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.573 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:07.573 00:25:07.573 Run status group 0 (all jobs): 00:25:07.573 READ: bw=53.9MiB/s (56.5MB/s), 53.9MiB/s-53.9MiB/s (56.5MB/s-56.5MB/s), io=108MiB (113MB), run=2005-2005msec 00:25:07.573 WRITE: bw=53.9MiB/s (56.5MB/s), 53.9MiB/s-53.9MiB/s (56.5MB/s-56.5MB/s), io=108MiB (113MB), run=2005-2005msec 00:25:07.573 07:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:07.573 07:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:07.573 07:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:07.573 07:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:07.573 07:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:25:07.573 
07:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:07.573 07:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:25:07.573 07:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:07.573 07:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:07.573 07:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:07.573 07:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:25:07.573 07:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:07.573 07:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:07.573 07:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:07.573 07:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:07.573 07:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:07.573 07:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:25:07.573 07:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:07.573 07:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:07.573 07:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:07.573 07:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:07.573 07:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:07.834 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:07.834 fio-3.35 00:25:07.834 Starting 1 thread 00:25:10.379 00:25:10.379 test: (groupid=0, jobs=1): err= 0: pid=3626510: Wed Nov 20 07:23:32 2024 00:25:10.379 read: IOPS=9517, BW=149MiB/s (156MB/s)(298MiB/2004msec) 00:25:10.379 slat (usec): min=3, max=110, avg= 3.60, stdev= 1.60 00:25:10.379 clat (usec): min=2222, max=15783, avg=8284.88, stdev=2057.70 00:25:10.379 lat (usec): min=2225, max=15786, avg=8288.47, stdev=2057.84 00:25:10.379 clat percentiles (usec): 00:25:10.379 | 1.00th=[ 4146], 5.00th=[ 5211], 10.00th=[ 5735], 20.00th=[ 6390], 00:25:10.379 | 30.00th=[ 6980], 40.00th=[ 7570], 50.00th=[ 8160], 60.00th=[ 8717], 00:25:10.379 | 70.00th=[ 9372], 80.00th=[10159], 90.00th=[11076], 95.00th=[11731], 00:25:10.379 | 99.00th=[13173], 99.50th=[13829], 99.90th=[14877], 99.95th=[15270], 00:25:10.379 | 99.99th=[15795] 00:25:10.379 bw ( KiB/s): min=70144, max=83200, per=49.23%, avg=74968.00, stdev=5726.92, samples=4 00:25:10.379 iops : min= 4384, max= 5200, avg=4685.50, stdev=357.93, samples=4 00:25:10.379 write: IOPS=5595, BW=87.4MiB/s (91.7MB/s)(154MiB/1762msec); 0 zone resets 00:25:10.379 slat (usec): min=39, max=448, 
avg=41.00, stdev= 8.34 00:25:10.379 clat (usec): min=2436, max=15983, avg=9134.65, stdev=1426.27 00:25:10.379 lat (usec): min=2479, max=16116, avg=9175.65, stdev=1428.50 00:25:10.379 clat percentiles (usec): 00:25:10.379 | 1.00th=[ 5866], 5.00th=[ 7177], 10.00th=[ 7504], 20.00th=[ 7963], 00:25:10.379 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9372], 00:25:10.379 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[10945], 95.00th=[11600], 00:25:10.379 | 99.00th=[12780], 99.50th=[13435], 99.90th=[15533], 99.95th=[15664], 00:25:10.379 | 99.99th=[15926] 00:25:10.379 bw ( KiB/s): min=73280, max=86528, per=87.50%, avg=78344.00, stdev=5752.43, samples=4 00:25:10.379 iops : min= 4580, max= 5408, avg=4896.50, stdev=359.53, samples=4 00:25:10.379 lat (msec) : 4=0.57%, 10=76.73%, 20=22.70% 00:25:10.379 cpu : usr=83.47%, sys=15.13%, ctx=19, majf=0, minf=25 00:25:10.379 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:10.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:10.379 issued rwts: total=19074,9860,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.379 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.379 00:25:10.379 Run status group 0 (all jobs): 00:25:10.379 READ: bw=149MiB/s (156MB/s), 149MiB/s-149MiB/s (156MB/s-156MB/s), io=298MiB (313MB), run=2004-2004msec 00:25:10.379 WRITE: bw=87.4MiB/s (91.7MB/s), 87.4MiB/s-87.4MiB/s (91.7MB/s-91.7MB/s), io=154MiB (162MB), run=1762-1762msec 00:25:10.379 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:10.379 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:10.379 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:10.379 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:10.379 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:10.379 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:10.379 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:10.379 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:10.379 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:10.379 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:10.379 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:10.379 rmmod nvme_tcp 00:25:10.379 rmmod nvme_fabrics 00:25:10.379 rmmod nvme_keyring 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3625135 ']' 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3625135 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 3625135 ']' 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 3625135 
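[editor's note] killprocess, traced above and below this point, is the harness's standard teardown: confirm the pid still exists, identify it, SIGTERM it, and reap it. A condensed sketch of that pattern (assuming, as here, the target is a child of the test shell so wait can collect it):

    # Sketch of the killprocess pattern being executed in this log.
    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # already gone
        kill "$pid"                              # SIGTERM: graceful shutdown
        wait "$pid" 2>/dev/null || true          # reap our child process
    }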
00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3625135 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3625135' 00:25:10.640 killing process with pid 3625135 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 3625135 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 3625135 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:10.640 07:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.185 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:13.185 00:25:13.185 real 0m18.075s 00:25:13.185 user 1m3.618s 00:25:13.185 sys 0m7.937s 00:25:13.185 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:13.186 07:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.186 ************************************ 00:25:13.186 END TEST nvmf_fio_host 00:25:13.186 ************************************ 00:25:13.186 07:23:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:13.186 07:23:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:13.186 07:23:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:13.186 07:23:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.186 ************************************ 00:25:13.186 START TEST nvmf_failover 00:25:13.186 ************************************ 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:13.186 * Looking for test storage... 00:25:13.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:13.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.186 --rc genhtml_branch_coverage=1 00:25:13.186 --rc genhtml_function_coverage=1 00:25:13.186 --rc genhtml_legend=1 00:25:13.186 --rc geninfo_all_blocks=1 00:25:13.186 --rc geninfo_unexecuted_blocks=1 00:25:13.186 00:25:13.186 ' 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:13.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.186 --rc genhtml_branch_coverage=1 00:25:13.186 --rc genhtml_function_coverage=1 00:25:13.186 --rc genhtml_legend=1 00:25:13.186 --rc geninfo_all_blocks=1 00:25:13.186 --rc geninfo_unexecuted_blocks=1 00:25:13.186 00:25:13.186 ' 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:13.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.186 --rc genhtml_branch_coverage=1 00:25:13.186 --rc genhtml_function_coverage=1 00:25:13.186 --rc genhtml_legend=1 00:25:13.186 --rc geninfo_all_blocks=1 00:25:13.186 --rc geninfo_unexecuted_blocks=1 00:25:13.186 00:25:13.186 ' 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:13.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.186 --rc genhtml_branch_coverage=1 00:25:13.186 --rc genhtml_function_coverage=1 00:25:13.186 --rc genhtml_legend=1 00:25:13.186 --rc geninfo_all_blocks=1 00:25:13.186 --rc geninfo_unexecuted_blocks=1 00:25:13.186 00:25:13.186 ' 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.186 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.187 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.187 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:13.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:13.187 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:13.187 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:13.187 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:13.187 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:13.187 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:13.187 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
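[editor's note] failover.sh points rpc_py at the same script the fio_host run used. For reference, the subsystem setup that earlier run issued over it (values copied verbatim from this log) looks like:

    # RPC sequence, exactly as traced in the fio_host run above, that
    # stands up the NVMe/TCP target this family of host tests talks to.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420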
00:25:13.187 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:13.187 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:13.187 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:13.187 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:13.187 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:13.187 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:13.187 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:13.187 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.187 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:13.187 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.187 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:13.187 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:13.187 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:13.187 07:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:21.334 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:21.334 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:21.334 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:21.334 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
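The discovery above amounts to: match PCI vendor/device IDs against a table of supported NICs, then collect the kernel net interfaces under each matching device in sysfs. A minimal bash sketch of that logic, assuming only the 0x8086:0x159b (E810 "ice") ID this run actually matched; the real nvmf/common.sh carries a larger table (x722, Mellanox mlx5 IDs) and extra unbound/RDMA handling:

  # Sketch of gather_supported_nvmf_pci_devs, reduced to the one device ID seen above.
  net_devs=()
  for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    for dev in "$pci"/net/*; do
      [[ -e $dev ]] && net_devs+=("${dev##*/}")   # yields cvl_0_0 and cvl_0_1 on this host
    done
  done
  echo "Found net devices: ${net_devs[*]}"

With two interfaces found, the script then designates cvl_0_0 as the target interface and cvl_0_1 as the initiator interface, as the TCP_INTERFACE_LIST lines above show.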
00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:21.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:21.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:25:21.334 00:25:21.334 --- 10.0.0.2 ping statistics --- 00:25:21.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.334 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:21.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:25:21.334 00:25:21.334 --- 10.0.0.1 ping statistics --- 00:25:21.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.334 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:25:21.334 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:21.335 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.335 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:21.335 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:21.335 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.335 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:21.335 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:21.335 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:21.335 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:21.335 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:21.335 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:21.335 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3631167 00:25:21.335 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3631167 00:25:21.335 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:21.335 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3631167 ']' 00:25:21.335 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.335 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:21.335 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.335 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:21.335 07:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:21.335 [2024-11-20 07:23:42.840201] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:25:21.335 [2024-11-20 07:23:42.840268] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.335 [2024-11-20 07:23:42.942411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:21.335 [2024-11-20 07:23:42.994422] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
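Condensed, the nvmf_tcp_init plumbing just shown is: move one port of the NIC into a private network namespace, address both ends, punch a firewall hole for the NVMe/TCP port, verify reachability in both directions, then launch the target inside the namespace. A sketch using the exact names, addresses, and flags from this run (run as root; the nvmf_tgt path is this workspace's build output):

  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec $NS ping -c 1 10.0.0.1
  ip netns exec $NS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

Because both ports sit on the same physical adapter, the namespace split is what forces initiator and target traffic over a real external link instead of the loopback path.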
00:25:21.335 [2024-11-20 07:23:42.994472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-11-20 07:23:42.994481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-11-20 07:23:42.994488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-11-20 07:23:42.994494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-11-20 07:23:42.996375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
[2024-11-20 07:23:42.996536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-11-20 07:23:42.996537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:25:21.597 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:25:21.597 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:25:21.597 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:21.597 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable
00:25:21.597 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:21.597 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:21.597 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:25:21.597 [2024-11-20 07:23:43.863841] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:21.858 07:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:25:21.858 Malloc0
00:25:22.119 07:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:22.380 07:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:22.641 07:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:22.641 [2024-11-20 07:23:44.677363] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:22.641 07:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:22.641 [2024-11-20 07:23:44.865965] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:22.641 07:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:22.903 [2024-11-20 07:23:45.062704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:25:22.903 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3631693
00:25:22.903 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:22.903 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:25:22.903 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3631693 /var/tmp/bdevperf.sock
00:25:22.903 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3631693 ']'
00:25:22.903 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:22.903 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:25:22.903 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:22.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:22.903 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
00:25:22.903 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:23.843 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:25:23.843 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:25:23.843 07:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:24.102 NVMe0n1
00:25:24.102 07:23:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:24.362 00
00:25:24.362 07:23:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:24.362 07:23:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3631889
00:25:24.362 07:23:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:25:25.304 07:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:25.564 [2024-11-20 07:23:47.678518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccbed0 is same with the state(6) to be set
00:25:25.564 [2024-11-20 07:23:47.678556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccbed0 is same with the state(6) to be set
00:25:25.564 [2024-11-20 07:23:47.678562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccbed0 is same with the state(6) to be set
00:25:25.564 [2024-11-20 07:23:47.678567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccbed0 is same with the state(6) to be set
00:25:25.564 [2024-11-20 07:23:47.678572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccbed0 is same with the state(6) to be set
00:25:25.564 07:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:25:28.863 07:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:28.864 00
00:25:28.864 07:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:29.124 [2024-11-20 07:23:51.139385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccf0 is same with the state(6) to be set
00:25:29.125 [2024-11-20 07:23:51.139594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccf0 is same with the state(6) to be set
00:25:29.125 07:23:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:32.420 07:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:32.420 [2024-11-20 07:23:54.330630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:32.420 07:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:33.359 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:33.359 [2024-11-20 07:23:55.520827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccdbf0 is same with the state(6) to be set
00:25:33.360 [2024-11-20 07:23:55.521231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccdbf0 is same with the state(6) to be set
00:25:33.360 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3631889
00:25:39.940 {
00:25:39.940 "results": [
00:25:39.940 {
00:25:39.940 "job": "NVMe0n1",
00:25:39.940 "core_mask": "0x1",
00:25:39.940 "workload": "verify",
00:25:39.940 "status": "finished",
00:25:39.940 "verify_range": {
00:25:39.940 "start": 0,
00:25:39.940 "length": 16384
00:25:39.940 },
00:25:39.940 "queue_depth": 128,
00:25:39.940 "io_size": 4096,
00:25:39.940 "runtime": 15.004324,
00:25:39.940 "iops": 12392.027791455317,
00:25:39.940 "mibps": 48.40635856037233,
00:25:39.940 "io_failed": 12565,
00:25:39.940 "io_timeout": 0,
00:25:39.940 "avg_latency_us": 9653.772290641264,
00:25:39.940 "min_latency_us": 419.84,
00:25:39.940 "max_latency_us": 19333.12
00:25:39.940 }
00:25:39.940 ],
00:25:39.940 "core_count": 1
00:25:39.940 }
00:25:39.940 07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3631693
00:25:39.940 07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3631693 ']'
00:25:39.940 07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3631693
00:25:39.940 07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:25:39.940 07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:39.940 07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3631693
00:25:39.940 07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:25:39.940 07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:25:39.940 07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3631693'
00:25:39.940 killing process with pid 3631693
00:25:39.940 07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3631693
00:25:39.940 07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3631693
00:25:39.940 07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:39.940 [2024-11-20 07:23:45.147382] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
[2024-11-20 07:23:45.147444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3631693 ]
[2024-11-20 07:23:45.234340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 07:23:45.270168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:39.940 Running I/O for 15 seconds...
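With the run complete, the whole failover exercise the log just walked through fits in one short sketch; every RPC below is one actually issued above (the later remove/re-add rounds on ports 4420 and 4422 follow the same pattern and are abridged):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  # target provisioning (failover.sh lines 22-28)
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns $NQN Malloc0
  for port in 4420 4421 4422; do
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s $port
  done
  # host side: bdevperf attaches two portals of the same subsystem in failover mode
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN -x failover
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN -x failover
  # while verify I/O runs, force a path switch by deleting the active listener
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN -x failover
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421

Each remove_listener is what produced the tqpair recv-state errors above, and it is also what produces the ABORTED - SQ DELETION completions in the bdevperf log that follows: the -x failover policy lets the aborted I/O be reissued on a surviving path, so the verify job can still finish cleanly.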
00:25:39.941 11088.00 IOPS, 43.31 MiB/s [2024-11-20T06:24:02.219Z]
[2024-11-20 07:23:47.680598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-20 07:23:47.680633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 07:23:47.680643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-20 07:23:47.680651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 07:23:47.680660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-20 07:23:47.680667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 07:23:47.680675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-20 07:23:47.680683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 07:23:47.680690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b90d70 is same with the state(6) to be set
[2024-11-20 07:23:47.680759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-20 07:23:47.680770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 07:23:47.681135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-20 07:23:47.681142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:39.942 [2024-11-20 07:23:47.681718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:39.942 [2024-11-20 07:23:47.681725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.942 [2024-11-20 07:23:47.681742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.942 [2024-11-20 07:23:47.681752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.942 [2024-11-20 07:23:47.681760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.942 [2024-11-20 07:23:47.681769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.942 [2024-11-20 07:23:47.681776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.942 [2024-11-20 07:23:47.681786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.942 [2024-11-20 07:23:47.681793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.942 [2024-11-20 07:23:47.681802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.942 [2024-11-20 07:23:47.681810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.942 [2024-11-20 07:23:47.681820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.942 [2024-11-20 07:23:47.681828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.942 [2024-11-20 07:23:47.681837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.942 [2024-11-20 07:23:47.681844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.942 [2024-11-20 07:23:47.681854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.942 [2024-11-20 07:23:47.681861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.942 [2024-11-20 07:23:47.681870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.942 [2024-11-20 07:23:47.681878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.942 [2024-11-20 07:23:47.681887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.942 [2024-11-20 07:23:47.681894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.942 [2024-11-20 07:23:47.681904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:41 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.942 [2024-11-20 07:23:47.681911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.942 [2024-11-20 07:23:47.681920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.942 [2024-11-20 07:23:47.681928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.942 [2024-11-20 07:23:47.681937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.942 [2024-11-20 07:23:47.681944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.942 [2024-11-20 07:23:47.681953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.942 [2024-11-20 07:23:47.681962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.942 [2024-11-20 07:23:47.681971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.942 [2024-11-20 07:23:47.681978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.942 [2024-11-20 07:23:47.681988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.942 [2024-11-20 07:23:47.681995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.942 [2024-11-20 07:23:47.682004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 
07:23:47.682246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.943 [2024-11-20 07:23:47.682661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.943 [2024-11-20 07:23:47.682668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.944 [2024-11-20 07:23:47.682677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.944 [2024-11-20 07:23:47.682685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.944 [2024-11-20 07:23:47.682694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.944 [2024-11-20 07:23:47.682701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.944 [2024-11-20 07:23:47.682710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.944 [2024-11-20 07:23:47.682718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.944 [2024-11-20 07:23:47.682727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.944 [2024-11-20 07:23:47.682734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.944 [2024-11-20 07:23:47.682743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.944 [2024-11-20 07:23:47.682750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:39.944 [2024-11-20 07:23:47.682759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.944 [2024-11-20 07:23:47.682767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.944 [2024-11-20 07:23:47.682776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.944 [2024-11-20 07:23:47.682784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.944 [2024-11-20 07:23:47.682793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.944 [2024-11-20 07:23:47.682800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.944 [2024-11-20 07:23:47.682811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.944 [2024-11-20 07:23:47.682818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.944 [2024-11-20 07:23:47.682828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.944 [2024-11-20 07:23:47.682835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.944 [2024-11-20 07:23:47.682844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.944 [2024-11-20 07:23:47.682852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.944 [2024-11-20 07:23:47.682861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.944 [2024-11-20 07:23:47.682868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.944 [2024-11-20 07:23:47.682878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.944 [2024-11-20 07:23:47.682886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.944 [2024-11-20 07:23:47.682895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.944 [2024-11-20 07:23:47.682903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.944 [2024-11-20 07:23:47.682913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.944 [2024-11-20 07:23:47.682921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.944 [2024-11-20 
00:25:39.944 [2024-11-20 07:23:47.682940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:39.944 [2024-11-20 07:23:47.682947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:39.944 [2024-11-20 07:23:47.682954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97000 len:8 PRP1 0x0 PRP2 0x0
00:25:39.944 [2024-11-20 07:23:47.682961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:39.944 [2024-11-20 07:23:47.683006] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:39.944 [2024-11-20 07:23:47.683021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:39.944 [2024-11-20 07:23:47.686512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:39.944 [2024-11-20 07:23:47.686534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b90d70 (9): Bad file descriptor
00:25:39.944 [2024-11-20 07:23:47.717908] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:25:39.944 11008.00 IOPS, 43.00 MiB/s [2024-11-20T06:24:02.222Z]
11086.00 IOPS, 43.30 MiB/s [2024-11-20T06:24:02.222Z]
11430.00 IOPS, 44.65 MiB/s [2024-11-20T06:24:02.222Z]
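The sequence above is SPDK's bdev_nvme failover path in action: when the TCP connection on 10.0.0.2:4420 fails, nvme_qpair_abort_queued_reqs manually completes every command still queued on qid:1 as ABORTED - SQ DELETION, bdev_nvme_failover_trid switches the controller to the alternate trid 10.0.0.2:4421, and I/O resumes once the reset completes (the IOPS samples pick back up right afterward). A minimal sketch of the two-path attach that exercises this, assuming the standard scripts/rpc.py interface; the bdev name Nvme0 and the exact flags used by this test script are assumptions, since the script itself is not shown in this log:

  # attach the primary path (subsystem NQN and addresses taken from the log above)
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # attach the alternate path to the same subsystem; with "-x failover" it stays
  # passive until the active path fails, which is the switch logged above
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover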
[... repeated nvme_qpair.c NOTICE pairs trimmed (07:23:51.140753 through 07:23:51.141791): each remaining queued command on qid:1, WRITE lba 39456-39968 and READ lba 39200-39360, len:8, is printed by nvme_io_qpair_print_command and then completed as ABORTED - SQ DELETION (00/08) ...]
00:25:39.946 [2024-11-20 07:23:51.141798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:39368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:39.946 [2024-11-20 07:23:51.141803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.946 [2024-11-20 07:23:51.141809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.946 [2024-11-20 07:23:51.141814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.946 [2024-11-20 07:23:51.141820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:39384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.946 [2024-11-20 07:23:51.141826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.946 [2024-11-20 07:23:51.141832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.946 [2024-11-20 07:23:51.141837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.946 [2024-11-20 07:23:51.141844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.946 [2024-11-20 07:23:51.141849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.946 [2024-11-20 07:23:51.141856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.946 [2024-11-20 07:23:51.141861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.141867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.947 [2024-11-20 07:23:51.141872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.141878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.947 [2024-11-20 07:23:51.141883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.141889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.947 [2024-11-20 07:23:51.141894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.141900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.947 [2024-11-20 07:23:51.141906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.141912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:40032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.947 [2024-11-20 07:23:51.141918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 
[2024-11-20 07:23:51.141924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.947 [2024-11-20 07:23:51.141929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.141936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.947 [2024-11-20 07:23:51.141941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.141947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.947 [2024-11-20 07:23:51.141952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.141959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.947 [2024-11-20 07:23:51.141964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.141970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.947 [2024-11-20 07:23:51.141975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.141982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.947 [2024-11-20 07:23:51.141988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.141994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.947 [2024-11-20 07:23:51.141999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.142006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.947 [2024-11-20 07:23:51.142011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.142017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.947 [2024-11-20 07:23:51.142022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.142028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.947 [2024-11-20 07:23:51.142033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.142040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.947 [2024-11-20 07:23:51.142045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.142051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:40128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.947 [2024-11-20 07:23:51.142056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.142072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.947 [2024-11-20 07:23:51.142078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40136 len:8 PRP1 0x0 PRP2 0x0 00:25:39.947 [2024-11-20 07:23:51.142083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.142091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.947 [2024-11-20 07:23:51.142095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.947 [2024-11-20 07:23:51.142100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40144 len:8 PRP1 0x0 PRP2 0x0 00:25:39.947 [2024-11-20 07:23:51.142105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.142110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.947 [2024-11-20 07:23:51.142114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.947 [2024-11-20 07:23:51.142118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40152 len:8 PRP1 0x0 PRP2 0x0 00:25:39.947 [2024-11-20 07:23:51.142123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.142129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.947 [2024-11-20 07:23:51.142132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.947 [2024-11-20 07:23:51.142136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40160 len:8 PRP1 0x0 PRP2 0x0 00:25:39.947 [2024-11-20 07:23:51.142142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.142148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.947 [2024-11-20 07:23:51.142153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.947 [2024-11-20 07:23:51.142162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40168 len:8 PRP1 0x0 PRP2 0x0 00:25:39.947 [2024-11-20 07:23:51.142168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.142173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued 
i/o 00:25:39.947 [2024-11-20 07:23:51.142177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.947 [2024-11-20 07:23:51.142182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40176 len:8 PRP1 0x0 PRP2 0x0 00:25:39.947 [2024-11-20 07:23:51.142187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.142192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.947 [2024-11-20 07:23:51.142197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.947 [2024-11-20 07:23:51.142201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40184 len:8 PRP1 0x0 PRP2 0x0 00:25:39.947 [2024-11-20 07:23:51.142206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.142212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.947 [2024-11-20 07:23:51.142216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.947 [2024-11-20 07:23:51.142220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40192 len:8 PRP1 0x0 PRP2 0x0 00:25:39.947 [2024-11-20 07:23:51.142225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.142231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.947 [2024-11-20 07:23:51.142235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.947 [2024-11-20 07:23:51.142239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40200 len:8 PRP1 0x0 PRP2 0x0 00:25:39.947 [2024-11-20 07:23:51.142244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.142250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.947 [2024-11-20 07:23:51.142253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.947 [2024-11-20 07:23:51.142258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40208 len:8 PRP1 0x0 PRP2 0x0 00:25:39.947 [2024-11-20 07:23:51.142263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.142269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.947 [2024-11-20 07:23:51.142273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.947 [2024-11-20 07:23:51.142277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39392 len:8 PRP1 0x0 PRP2 0x0 00:25:39.947 [2024-11-20 07:23:51.142282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.142288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.947 [2024-11-20 07:23:51.142292] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.947 [2024-11-20 07:23:51.142296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39400 len:8 PRP1 0x0 PRP2 0x0 00:25:39.947 [2024-11-20 07:23:51.142303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.142308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.947 [2024-11-20 07:23:51.142312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.947 [2024-11-20 07:23:51.142316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39408 len:8 PRP1 0x0 PRP2 0x0 00:25:39.947 [2024-11-20 07:23:51.142321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.947 [2024-11-20 07:23:51.142326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.947 [2024-11-20 07:23:51.142330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.947 [2024-11-20 07:23:51.142335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39416 len:8 PRP1 0x0 PRP2 0x0 00:25:39.948 [2024-11-20 07:23:51.142340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.948 [2024-11-20 07:23:51.142345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.948 [2024-11-20 07:23:51.142349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.948 [2024-11-20 07:23:51.142353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39424 len:8 PRP1 0x0 PRP2 0x0 00:25:39.948 [2024-11-20 07:23:51.142358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.948 [2024-11-20 07:23:51.142364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.948 [2024-11-20 07:23:51.142368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.948 [2024-11-20 07:23:51.142372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39432 len:8 PRP1 0x0 PRP2 0x0 00:25:39.948 [2024-11-20 07:23:51.142377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.948 [2024-11-20 07:23:51.153412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.948 [2024-11-20 07:23:51.153434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.948 [2024-11-20 07:23:51.153443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39440 len:8 PRP1 0x0 PRP2 0x0 00:25:39.948 [2024-11-20 07:23:51.153451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.948 [2024-11-20 07:23:51.153457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.948 [2024-11-20 07:23:51.153461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:25:39.948 [2024-11-20 07:23:51.153466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39448 len:8 PRP1 0x0 PRP2 0x0 00:25:39.948 [2024-11-20 07:23:51.153471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.948 [2024-11-20 07:23:51.153477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.948 [2024-11-20 07:23:51.153481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.948 [2024-11-20 07:23:51.153485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40216 len:8 PRP1 0x0 PRP2 0x0 00:25:39.948 [2024-11-20 07:23:51.153490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.948 [2024-11-20 07:23:51.153524] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:39.948 [2024-11-20 07:23:51.153551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.948 [2024-11-20 07:23:51.153558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.948 [2024-11-20 07:23:51.153565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.948 [2024-11-20 07:23:51.153570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.948 [2024-11-20 07:23:51.153575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.948 [2024-11-20 07:23:51.153580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.948 [2024-11-20 07:23:51.153586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.948 [2024-11-20 07:23:51.153591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.948 [2024-11-20 07:23:51.153596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:25:39.948 [2024-11-20 07:23:51.153628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b90d70 (9): Bad file descriptor 00:25:39.948 [2024-11-20 07:23:51.156454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:39.948 [2024-11-20 07:23:51.339281] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
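The "(00/08)" tuple printed with every completion above is SCT/SC: status code type 0x0 (generic command status) and status code 0x08, which the NVMe spec defines as Command Aborted due to SQ Deletion. That is the expected way for outstanding I/O on qid:1 to complete when the host tears the submission queue down during the path failover, and the recovering IOPS counters below suggest the aborted I/O is retried once the controller reset finishes. A minimal, self-contained C sketch of the decoding follows; the values mirror the spec's generic-status table, but the identifier names here are illustrative, not SPDK's (SPDK's own definitions live in include/spdk/nvme_spec.h):

    #include <stdio.h>
    #include <stdint.h>

    /* Spec-defined values; names are illustrative, not SPDK's. */
    #define SCT_GENERIC            0x0
    #define SC_ABORTED_SQ_DELETION 0x08

    /* Decode the "(SCT/SC)" tuple that spdk_nvme_print_completion logs. */
    static const char *
    decode_status(uint8_t sct, uint8_t sc)
    {
            if (sct == SCT_GENERIC && sc == SC_ABORTED_SQ_DELETION) {
                    return "ABORTED - SQ DELETION";
            }
            return "other status";
    }

    int
    main(void)
    {
            /* The tuple seen on every completion in this stretch of the log. */
            printf("(%02x/%02x) -> %s\n", SCT_GENERIC, SC_ABORTED_SQ_DELETION,
                   decode_status(SCT_GENERIC, SC_ABORTED_SQ_DELETION));
            return 0;
    }

As a sanity check on the counters below: each command is len:8 blocks of 512 B (the SGL length 0x1000 = 4096 B confirms a 4 KiB I/O size), and 11233.80 IOPS x 4096 B ≈ 43.88 MiB/s, matching the reported rate.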
00:25:39.948 11233.80 IOPS, 43.88 MiB/s [2024-11-20T06:24:02.226Z] 11503.83 IOPS, 44.94 MiB/s [2024-11-20T06:24:02.226Z] 11713.86 IOPS, 45.76 MiB/s [2024-11-20T06:24:02.226Z] 11881.00 IOPS, 46.41 MiB/s [2024-11-20T06:24:02.226Z]
00:25:39.948 [2024-11-20 07:23:55.522570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:39.948 [2024-11-20 07:23:55.522599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the abort pattern repeats at 07:23:55: in-flight READs (lba 26816-27192) and WRITEs (lba 27200-27680) on qid:1 each reported ABORTED - SQ DELETION (00/08); command/completion pairs elided ...]
00:25:39.951 [2024-11-20 07:23:55.523891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:27688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:39.951 [2024-11-20 07:23:55.523896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.951 [2024-11-20 07:23:55.523902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:27696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.951 [2024-11-20 07:23:55.523907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.951 [2024-11-20 07:23:55.523925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.951 [2024-11-20 07:23:55.523930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27704 len:8 PRP1 0x0 PRP2 0x0 00:25:39.951 [2024-11-20 07:23:55.523935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.951 [2024-11-20 07:23:55.523942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.951 [2024-11-20 07:23:55.523946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.951 [2024-11-20 07:23:55.523951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27712 len:8 PRP1 0x0 PRP2 0x0 00:25:39.951 [2024-11-20 07:23:55.523956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.951 [2024-11-20 07:23:55.523961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.951 [2024-11-20 07:23:55.523965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.951 [2024-11-20 07:23:55.523969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27720 len:8 PRP1 0x0 PRP2 0x0 00:25:39.951 [2024-11-20 07:23:55.523974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.951 [2024-11-20 07:23:55.523979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.951 [2024-11-20 07:23:55.523983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.951 [2024-11-20 07:23:55.523987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27728 len:8 PRP1 0x0 PRP2 0x0 00:25:39.951 [2024-11-20 07:23:55.523992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.951 [2024-11-20 07:23:55.523997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.951 [2024-11-20 07:23:55.524001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.951 [2024-11-20 07:23:55.524005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27736 len:8 PRP1 0x0 PRP2 0x0 00:25:39.951 [2024-11-20 07:23:55.524010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.951 [2024-11-20 07:23:55.524016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.951 [2024-11-20 07:23:55.524021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.951 [2024-11-20 07:23:55.524025] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27744 len:8 PRP1 0x0 PRP2 0x0 00:25:39.951 [2024-11-20 07:23:55.524029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.951 [2024-11-20 07:23:55.524035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.951 [2024-11-20 07:23:55.524039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.951 [2024-11-20 07:23:55.524043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27752 len:8 PRP1 0x0 PRP2 0x0 00:25:39.951 [2024-11-20 07:23:55.524048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.951 [2024-11-20 07:23:55.524053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.951 [2024-11-20 07:23:55.524057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.951 [2024-11-20 07:23:55.524061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27760 len:8 PRP1 0x0 PRP2 0x0 00:25:39.951 [2024-11-20 07:23:55.524066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.951 [2024-11-20 07:23:55.524071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.951 [2024-11-20 07:23:55.524075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.951 [2024-11-20 07:23:55.524080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27768 len:8 PRP1 0x0 PRP2 0x0 00:25:39.951 [2024-11-20 07:23:55.524085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.951 [2024-11-20 07:23:55.524090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.951 [2024-11-20 07:23:55.524094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.951 [2024-11-20 07:23:55.524098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27776 len:8 PRP1 0x0 PRP2 0x0 00:25:39.951 [2024-11-20 07:23:55.524103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.951 [2024-11-20 07:23:55.524108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.951 [2024-11-20 07:23:55.524113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.951 [2024-11-20 07:23:55.524117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27784 len:8 PRP1 0x0 PRP2 0x0 00:25:39.951 [2024-11-20 07:23:55.524122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.951 [2024-11-20 07:23:55.524127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.951 [2024-11-20 07:23:55.524131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.951 [2024-11-20 07:23:55.524135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27792 len:8 PRP1 
0x0 PRP2 0x0 00:25:39.951 [2024-11-20 07:23:55.524140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.951 [2024-11-20 07:23:55.524146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.951 [2024-11-20 07:23:55.524149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.951 [2024-11-20 07:23:55.524154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27800 len:8 PRP1 0x0 PRP2 0x0 00:25:39.951 [2024-11-20 07:23:55.524162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.951 [2024-11-20 07:23:55.524171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.951 [2024-11-20 07:23:55.524176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.951 [2024-11-20 07:23:55.524180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27808 len:8 PRP1 0x0 PRP2 0x0 00:25:39.951 [2024-11-20 07:23:55.524185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.951 [2024-11-20 07:23:55.536089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.951 [2024-11-20 07:23:55.536116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.951 [2024-11-20 07:23:55.536126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27816 len:8 PRP1 0x0 PRP2 0x0 00:25:39.951 [2024-11-20 07:23:55.536134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.951 [2024-11-20 07:23:55.536141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.951 [2024-11-20 07:23:55.536146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.951 [2024-11-20 07:23:55.536152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27824 len:8 PRP1 0x0 PRP2 0x0 00:25:39.952 [2024-11-20 07:23:55.536163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.952 [2024-11-20 07:23:55.536203] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:39.952 [2024-11-20 07:23:55.536231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.952 [2024-11-20 07:23:55.536239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.952 [2024-11-20 07:23:55.536248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.952 [2024-11-20 07:23:55.536255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.952 [2024-11-20 07:23:55.536262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 
cdw11:00000000 00:25:39.952 [2024-11-20 07:23:55.536268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.952 [2024-11-20 07:23:55.536275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.952 [2024-11-20 07:23:55.536282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.952 [2024-11-20 07:23:55.536288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:39.952 [2024-11-20 07:23:55.536325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b90d70 (9): Bad file descriptor 00:25:39.952 [2024-11-20 07:23:55.539340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:39.952 [2024-11-20 07:23:55.562568] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:25:39.952 11925.00 IOPS, 46.58 MiB/s [2024-11-20T06:24:02.230Z] 12043.80 IOPS, 47.05 MiB/s [2024-11-20T06:24:02.230Z] 12150.09 IOPS, 47.46 MiB/s [2024-11-20T06:24:02.230Z] 12221.92 IOPS, 47.74 MiB/s [2024-11-20T06:24:02.230Z] 12284.77 IOPS, 47.99 MiB/s [2024-11-20T06:24:02.230Z] 12333.07 IOPS, 48.18 MiB/s [2024-11-20T06:24:02.230Z] 12395.40 IOPS, 48.42 MiB/s 00:25:39.952 Latency(us) 00:25:39.952 [2024-11-20T06:24:02.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.952 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:39.952 Verification LBA range: start 0x0 length 0x4000 00:25:39.952 NVMe0n1 : 15.00 12392.03 48.41 837.43 0.00 9653.77 419.84 19333.12 00:25:39.952 [2024-11-20T06:24:02.230Z] =================================================================================================================== 00:25:39.952 [2024-11-20T06:24:02.230Z] Total : 12392.03 48.41 837.43 0.00 9653.77 419.84 19333.12 00:25:39.952 Received shutdown signal, test time was about 15.000000 seconds 00:25:39.952 00:25:39.952 Latency(us) 00:25:39.952 [2024-11-20T06:24:02.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.952 [2024-11-20T06:24:02.230Z] =================================================================================================================== 00:25:39.952 [2024-11-20T06:24:02.230Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:39.952 07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:39.952 07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:39.952 07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:39.952 07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3634981 00:25:39.952 07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3634981 /var/tmp/bdevperf.sock 00:25:39.952 07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:39.952 07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3634981 ']' 00:25:39.952 07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- 
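Note: the trace above is the pass/fail gate for the first half of the test: try.txt must contain exactly three 'Resetting controller successful' lines, one per failover, before the script relaunches bdevperf in RPC-driven mode (-z) for the second half. A minimal bash sketch of that gate, assuming try.txt holds the bdevperf log as in the trace:

    count=$(grep -c 'Resetting controller successful' try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi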
07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
07:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-11-20 07:24:02.846177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
07:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
[2024-11-20 07:24:03.030627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
07:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
NVMe0n1
07:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
07:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
07:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
07:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
07:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
07:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
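Note: this is the multipath setup for the second half of the test. Condensed from the trace above (long paths shortened; "rpc.py" stands for scripts/rpc.py in the SPDK tree), the sequence is:

    # target side: listen on two more ports of the same subsystem
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # host side: attach all three paths to one bdev controller in failover mode
    for port in 4420 4421 4422; do
        rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    done
    # drop the active 4420 path: the driver must fail over, not lose the bdev
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

With -x failover, the second and third attach calls register 10.0.0.2:4421/4422 as alternate paths of the existing NVMe0 controller rather than creating new controllers, which is why only the first call prints a bdev name (NVMe0n1).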
07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3636025
07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3636025
{
  "results": [
    {
      "job": "NVMe0n1",
      "core_mask": "0x1",
      "workload": "verify",
      "status": "finished",
      "verify_range": {
        "start": 0,
        "length": 16384
      },
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 1.044755,
      "iops": 12334.949342190273,
      "mibps": 48.183395867930756,
      "io_failed": 0,
      "io_timeout": 0,
      "avg_latency_us": 9959.710956260831,
      "min_latency_us": 1911.4666666666667,
      "max_latency_us": 46093.653333333335
    }
  ],
  "core_count": 1
}
07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-11-20 07:24:01.895885] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
[2024-11-20 07:24:01.895945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3634981 ]
[2024-11-20 07:24:01.980070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 07:24:02.008489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-20 07:24:04.192646] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[2024-11-20 07:24:04.192684 - 07:24:04.192727] nvme_qpair.c: 223/474: *NOTICE*: (condensed) four ASYNC EVENT REQUESTs (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed as ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 07:24:04.192737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
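Note: in the JSON above, mibps is derived from iops and io_size (MiB/s = IOPS x 4096 / 2^20). A quick sanity check of the reported numbers, not taken from the log itself:

    awk 'BEGIN { printf "%.2f MiB/s\n", 12334.949342190273 * 4096 / 1048576 }'   # prints 48.18 MiB/s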
[2024-11-20 07:24:04.192757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
[2024-11-20 07:24:04.192768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf29d70 (9): Bad file descriptor
[2024-11-20 07:24:04.203925] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
Running I/O for 1 seconds...
12759.00 IOPS, 49.84 MiB/s
Latency(us)
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x0 length 0x4000
NVMe0n1 : 1.04 12334.95 48.18 0.00 0.00 9959.71 1911.47 46093.65
===================================================================================================================
Total : 12334.95 48.18 0.00 0.00 9959.71 1911.47 46093.65
07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
07:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
07:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3634981
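Note: killprocess is the common teardown helper entered here. A minimal sketch of its behavior as visible in the xtrace (the process-name/sudo handling is simplified and error handling elided; not the verbatim common/autotest_common.sh source):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                       # fail fast if the pid is already gone
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                  # reap it; its exit status is ignored
    }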
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3634981 ']'
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3634981
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3634981
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3634981'
killing process with pid 3634981
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3634981
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3634981
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3631167 ']'
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3631167
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3631167 ']'
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3631167
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3631167
07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1
07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3631167'
killing process with pid 3631167
07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3631167
07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3631167
07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:50.907 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:50.907 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:50.907 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:50.907 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:25:50.907 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:50.907 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:25:50.907 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:50.907 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:50.907 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.907 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:50.907 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.449 07:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:53.449 00:25:53.449 real 0m40.181s 00:25:53.449 user 2m3.339s 00:25:53.449 sys 0m8.702s 00:25:53.449 07:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:53.449 07:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:53.449 ************************************ 00:25:53.449 END TEST nvmf_failover 00:25:53.449 ************************************ 00:25:53.449 07:24:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:53.449 07:24:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:53.449 07:24:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:53.449 07:24:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.449 ************************************ 00:25:53.449 START TEST nvmf_host_discovery 00:25:53.449 ************************************ 00:25:53.449 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:53.449 * Looking for test storage... 
00:25:53.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]]
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
(condensed: scripts/common.sh@333-@368 split both versions on '.', '-' and ':' into ver1=(1 15) and ver2=(2), then compare element by element; at v=0, "decimal 1" is less than "decimal 2", so the comparison returns 0 and lcov 1.15 is treated as older than 2)
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704/@1705 -- # export LCOV_OPTS and LCOV=lcov with the following rc options (the identical block is printed four times in the raw log, once per export/assignment):
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
--rc genhtml_branch_coverage=1
--rc genhtml_function_coverage=1
--rc genhtml_legend=1
--rc geninfo_all_blocks=1
--rc geninfo_unexecuted_blocks=1
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
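Note: lt/cmp_versions is the generic version gate used throughout these scripts. A minimal re-creation of the logic visible in the xtrace above (not the verbatim scripts/common.sh source; padding missing components with 0 and the equal-case return value are assumptions):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:                    # split version strings on '.', '-' and ':'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # left side newer
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # left side older: '<' holds
        done
        return 1                         # equal is not strictly '<'
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2"   # returns 0, matching the trace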
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2-@6 -- # PATH=... (five exports that repeatedly prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of the stock /usr/local/... path; the full, heavily duplicated values are elided here)
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:53.450 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:53.450 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:53.450 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:53.450 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:53.450 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:53.450 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.450 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:53.450 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:53.450 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:53.450 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.450 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:53.450 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.450 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:53.450 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:53.450 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:53.450 07:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:01.583 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:01.583 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:01.584 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:01.584 07:24:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:01.584 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:01.584 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:01.584 
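Note: the loop traced above resolves each matched PCI function to its kernel net device by listing /sys/bus/pci/devices/<bdf>/net. A minimal standalone re-creation in bash (the two device addresses are taken from this log; everything else is generic):

    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done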
07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:01.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:01.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:26:01.584 00:26:01.584 --- 10.0.0.2 ping statistics --- 00:26:01.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.584 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:01.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:01.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:26:01.584 00:26:01.584 --- 10.0.0.1 ping statistics --- 00:26:01.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.584 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:01.584 07:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:01.584 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:01.584 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:01.584 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:01.584 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.584 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3641801 00:26:01.584 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3641801 00:26:01.584 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:01.584 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 3641801 ']' 00:26:01.584 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.584 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:01.584 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:01.584 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:01.584 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.584 [2024-11-20 07:24:23.093829] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
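Collected out of the interleaved trace, the nvmf_tcp_init topology is: one E810 port moved into a network namespace as the target (10.0.0.2), the peer port left in the root namespace as the initiator (10.0.0.1). The commands are the ones logged above, in order:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator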
00:26:01.584 [2024-11-20 07:24:23.093895] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.584 [2024-11-20 07:24:23.192532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.584 [2024-11-20 07:24:23.242844] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:01.584 [2024-11-20 07:24:23.242894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:01.584 [2024-11-20 07:24:23.242902] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:01.584 [2024-11-20 07:24:23.242909] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:01.584 [2024-11-20 07:24:23.242916] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:01.584 [2024-11-20 07:24:23.243671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.844 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:01.844 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:26:01.844 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:01.844 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:01.844 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.845 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.845 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:01.845 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.845 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.845 [2024-11-20 07:24:23.955372] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:01.845 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.845 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:01.845 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.845 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.845 [2024-11-20 07:24:23.967623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:01.845 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.845 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:01.845 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.845 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.845 null0 00:26:01.845 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.845 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:01.845 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.845 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.845 null1 00:26:01.845 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.845 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:01.845 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.845 07:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.845 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.845 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3642102 00:26:01.845 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:01.845 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3642102 /tmp/host.sock 00:26:01.845 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 3642102 ']' 00:26:01.845 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:26:01.845 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:01.845 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:01.845 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:01.845 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:01.845 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.845 [2024-11-20 07:24:24.065686] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
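Target-side bring-up so far, restated as explicit commands. The suite's rpc_cmd wrapper resolves to scripts/rpc.py; the relative paths below are illustrative, and waitforlisten gates the first RPC on the socket being up:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009                       # discovery service
  ./scripts/rpc.py bdev_null_create null0 1000 512     # 1000 MiB, 512 B blocks
  ./scripts/rpc.py bdev_null_create null1 1000 512
  ./scripts/rpc.py bdev_wait_for_examine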
00:26:01.845 [2024-11-20 07:24:24.065754] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642102 ] 00:26:02.155 [2024-11-20 07:24:24.157762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.155 [2024-11-20 07:24:24.211276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
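The host side is a second nvmf_tgt instance acting as the initiator, on its own RPC socket so the two applications can be driven independently. The equivalent explicit commands (flags verbatim from the trace; paths illustrative):

  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  ./scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test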
host/discovery.sh@55 -- # sort 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:02.819 07:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.819 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:02.819 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:02.820 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.820 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.820 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.820 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:02.820 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:02.820 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:02.820 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.820 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.820 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:02.820 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:02.820 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.820 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:02.820 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:02.820 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.820 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:02.820 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.820 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:02.820 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.820 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.081 [2024-11-20 07:24:25.258972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:03.081 07:24:25 
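Every assertion from here on polls through the same few helpers; their bodies as reconstructed from the xtrace line numbers (a sketch of discovery.sh and autotest_common.sh, not the literal scripts):

  get_subsystem_names() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  # Re-evaluate a condition once per second, at most 10 times:
  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1    # assumed failure path; the trace above only shows successes
  }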
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:03.081 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:26:03.342 07:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:26:03.914 [2024-11-20 07:24:25.965383] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:03.914 [2024-11-20 07:24:25.965414] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:03.914 [2024-11-20 07:24:25.965429] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:03.914 
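The subsystem the host is now attaching to was assembled with four target-side RPCs, each visible verbatim earlier in the trace. nvmf_subsystem_add_host comes last because the discovery log page only advertises subsystems the querying host NQN may access, which is why the bdev_nvme attach messages appear only after it:

  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test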
[2024-11-20 07:24:26.053702] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:04.175 [2024-11-20 07:24:26.236065] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:04.175 [2024-11-20 07:24:26.237283] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x10667a0:1 started. 00:26:04.175 [2024-11-20 07:24:26.239205] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:04.175 [2024-11-20 07:24:26.239235] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:04.175 [2024-11-20 07:24:26.243541] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x10667a0 was disconnected and freed. delete nvme_qpair. 00:26:04.436 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:04.436 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:04.436 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:04.436 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:04.436 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.436 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:04.436 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.436 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:04.436 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:04.436 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.436 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.436 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:04.436 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:04.436 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:04.436 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:04.436 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:04.436 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.437 07:24:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # get_notification_count 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:04.437 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:04.699 [2024-11-20 07:24:26.711081] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x10350a0:1 started. 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:04.699 [2024-11-20 07:24:26.714917] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x10350a0 was disconnected and freed. delete nvme_qpair. 
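The notification_count/notify_id bookkeeping above comes from one more helper; a sketch consistent with the counts in the trace, where notify_id advances by however many events were read (one per newly registered bdev, hence 0 -> 1 -> 2 across null0 and null1):

  get_notification_count() {
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
          | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }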
00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.699 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.699 [2024-11-20 07:24:26.811645] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:04.699 [2024-11-20 07:24:26.812462] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:04.700 [2024-11-20 07:24:26.812497] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:04.700 [2024-11-20 07:24:26.899749] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:04.700 07:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:26:04.960 [2024-11-20 07:24:27.162231] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:26:04.960 [2024-11-20 07:24:27.162268] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:04.960 [2024-11-20 07:24:27.162276] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:04.960 [2024-11-20 07:24:27.162281] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:05.904 07:24:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:05.904 07:24:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:05.904 07:24:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:26:05.904 07:24:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:05.904 07:24:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:05.904 07:24:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
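The port-list assertion being retried here, as a one-liner (the trailing comment is the value the test waits for once the 4421 path attaches):

  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs    # -> "4420 4421"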
xtrace_disable 00:26:05.904 07:24:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:05.904 07:24:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.904 07:24:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:05.904 07:24:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.904 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:05.904 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:05.904 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:05.904 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:05.904 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:05.904 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:05.904 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:05.904 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:05.904 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:05.904 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:05.904 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:05.904 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:05.904 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.904 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.904 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.904 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:05.904 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:05.904 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:05.904 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:05.904 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:05.904 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.905 [2024-11-20 07:24:28.063722] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:05.905 [2024-11-20 07:24:28.063743] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:05.905 [2024-11-20 07:24:28.070564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.905 [2024-11-20 07:24:28.070583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.905 [2024-11-20 07:24:28.070594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.905 [2024-11-20 07:24:28.070602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.905 [2024-11-20 07:24:28.070610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.905 [2024-11-20 07:24:28.070617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.905 [2024-11-20 07:24:28.070625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.905 [2024-11-20 07:24:28.070632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.905 [2024-11-20 07:24:28.070640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1036e10 is same with the state(6) to be set 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:05.905 [2024-11-20 07:24:28.080671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1036e10 (9): Bad file descriptor 00:26:05.905 [2024-11-20 07:24:28.090708] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:05.905 [2024-11-20 07:24:28.090720] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:05.905 [2024-11-20 07:24:28.090725] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:05.905 [2024-11-20 07:24:28.090734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:05.905 [2024-11-20 07:24:28.090751] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:05.905 [2024-11-20 07:24:28.090876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.905 [2024-11-20 07:24:28.090892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1036e10 with addr=10.0.0.2, port=4420 00:26:05.905 [2024-11-20 07:24:28.090900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1036e10 is same with the state(6) to be set 00:26:05.905 [2024-11-20 07:24:28.090913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1036e10 (9): Bad file descriptor 00:26:05.905 [2024-11-20 07:24:28.090930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:05.905 [2024-11-20 07:24:28.090937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:05.905 [2024-11-20 07:24:28.090945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:05.905 [2024-11-20 07:24:28.090952] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:05.905 [2024-11-20 07:24:28.090958] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:26:05.905 [2024-11-20 07:24:28.090963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.905 [2024-11-20 07:24:28.100783] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:05.905 [2024-11-20 07:24:28.100795] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:05.905 [2024-11-20 07:24:28.100800] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:05.905 [2024-11-20 07:24:28.100804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:05.905 [2024-11-20 07:24:28.100819] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:05.905 [2024-11-20 07:24:28.101106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.905 [2024-11-20 07:24:28.101118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1036e10 with addr=10.0.0.2, port=4420 00:26:05.905 [2024-11-20 07:24:28.101126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1036e10 is same with the state(6) to be set 00:26:05.905 [2024-11-20 07:24:28.101142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1036e10 (9): Bad file descriptor 00:26:05.905 [2024-11-20 07:24:28.101165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:05.905 [2024-11-20 07:24:28.101173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:05.905 [2024-11-20 07:24:28.101180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:05.905 [2024-11-20 07:24:28.101187] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:05.905 [2024-11-20 07:24:28.101193] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:05.905 [2024-11-20 07:24:28.101197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:05.905 [2024-11-20 07:24:28.110850] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:05.905 [2024-11-20 07:24:28.110866] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:05.905 [2024-11-20 07:24:28.110871] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:05.905 [2024-11-20 07:24:28.110876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:05.905 [2024-11-20 07:24:28.110891] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.905 [2024-11-20 07:24:28.111185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.905 [2024-11-20 07:24:28.111199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1036e10 with addr=10.0.0.2, port=4420 00:26:05.905 [2024-11-20 07:24:28.111207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1036e10 is same with the state(6) to be set 00:26:05.905 [2024-11-20 07:24:28.111218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1036e10 (9): Bad file descriptor 00:26:05.905 [2024-11-20 07:24:28.111235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:05.905 [2024-11-20 07:24:28.111242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:05.905 [2024-11-20 07:24:28.111249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
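The waitforcondition calls traced above follow one shape: the helper takes a condition as a string, evals it up to a bounded number of attempts, and sleeps between tries (the autotest_common.sh@916-922 tags in these traces show local cond, local max=10, (( max-- )), eval, sleep 1, and return 0 on success). A minimal sketch of that shape, reconstructed from the traced lines; the real helper lives in common/autotest_common.sh and its failure path is not visible here, so the final return is an assumption:

    waitforcondition() {
        local cond=$1    # condition string, eval'd verbatim (sh@916)
        local max=10     # bounded retries (sh@917)
        while (( max-- )); do          # sh@918
            eval "$cond" && return 0   # sh@919-920
            sleep 1                    # sh@922, traced further down
        done
        return 1   # assumed: give up once retries are exhausted
    }

    # as traced: waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'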
00:26:05.905 [2024-11-20 07:24:28.111263] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:05.905 [2024-11-20 07:24:28.111268] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:05.905 [2024-11-20 07:24:28.111272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:05.905 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:05.905 [2024-11-20 07:24:28.120924] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:05.905 [2024-11-20 07:24:28.120937] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:05.905 [2024-11-20 07:24:28.120942] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:05.905 [2024-11-20 07:24:28.120946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:05.905 [2024-11-20 07:24:28.120961] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:05.905 [2024-11-20 07:24:28.121390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.905 [2024-11-20 07:24:28.121428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1036e10 with addr=10.0.0.2, port=4420 00:26:05.906 [2024-11-20 07:24:28.121439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1036e10 is same with the state(6) to be set 00:26:05.906 [2024-11-20 07:24:28.121458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1036e10 (9): Bad file descriptor 00:26:05.906 [2024-11-20 07:24:28.121471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:05.906 [2024-11-20 07:24:28.121478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:05.906 [2024-11-20 07:24:28.121486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:05.906 [2024-11-20 07:24:28.121494] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:05.906 [2024-11-20 07:24:28.121499] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:05.906 [2024-11-20 07:24:28.121504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:05.906 [2024-11-20 07:24:28.130994] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:05.906 [2024-11-20 07:24:28.131008] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:05.906 [2024-11-20 07:24:28.131013] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
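The blocks repeating through here are bdev_nvme's reset path cycling: delete qpairs, disconnect, reconnect to 10.0.0.2:4420, fail with errno 111 (ECONNREFUSED, since the 4420 listener was just torn down), mark the ctrlr failed, clear pending resets, and retry, until the discovery poller sees the subsystem on 4421 and re-attaches the path. The trigger is the listener removal traced further up:

    # host/discovery.sh@127, as traced above:
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420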
00:26:05.906 [2024-11-20 07:24:28.131018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:05.906 [2024-11-20 07:24:28.131034] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:05.906 [2024-11-20 07:24:28.131464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.906 [2024-11-20 07:24:28.131502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1036e10 with addr=10.0.0.2, port=4420 00:26:05.906 [2024-11-20 07:24:28.131513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1036e10 is same with the state(6) to be set 00:26:05.906 [2024-11-20 07:24:28.131532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1036e10 (9): Bad file descriptor 00:26:05.906 [2024-11-20 07:24:28.131558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:05.906 [2024-11-20 07:24:28.131571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:05.906 [2024-11-20 07:24:28.131579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:05.906 [2024-11-20 07:24:28.131586] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:05.906 [2024-11-20 07:24:28.131592] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:05.906 [2024-11-20 07:24:28.131596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:05.906 [2024-11-20 07:24:28.141067] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:05.906 [2024-11-20 07:24:28.141081] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:05.906 [2024-11-20 07:24:28.141086] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:05.906 [2024-11-20 07:24:28.141091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:05.906 [2024-11-20 07:24:28.141107] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:05.906 [2024-11-20 07:24:28.141538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.906 [2024-11-20 07:24:28.141576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1036e10 with addr=10.0.0.2, port=4420 00:26:05.906 [2024-11-20 07:24:28.141588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1036e10 is same with the state(6) to be set 00:26:05.906 [2024-11-20 07:24:28.141607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1036e10 (9): Bad file descriptor 00:26:05.906 [2024-11-20 07:24:28.141632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:05.906 [2024-11-20 07:24:28.141640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:05.906 [2024-11-20 07:24:28.141648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:05.906 [2024-11-20 07:24:28.141656] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:05.906 [2024-11-20 07:24:28.141661] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:05.906 [2024-11-20 07:24:28.141666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:05.906 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.906 [2024-11-20 07:24:28.151140] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:05.906 [2024-11-20 07:24:28.151155] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:05.906 [2024-11-20 07:24:28.151165] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:05.906 [2024-11-20 07:24:28.151170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:05.906 [2024-11-20 07:24:28.151187] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:05.906 [2024-11-20 07:24:28.151493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.906 [2024-11-20 07:24:28.151506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1036e10 with addr=10.0.0.2, port=4420 00:26:05.906 [2024-11-20 07:24:28.151514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1036e10 is same with the state(6) to be set 00:26:05.906 [2024-11-20 07:24:28.151525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1036e10 (9): Bad file descriptor 00:26:05.906 [2024-11-20 07:24:28.151541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:05.906 [2024-11-20 07:24:28.151547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:05.906 [2024-11-20 07:24:28.151555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:05.906 [2024-11-20 07:24:28.151561] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:05.906 [2024-11-20 07:24:28.151566] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:05.906 [2024-11-20 07:24:28.151570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:05.906 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:05.906 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:05.906 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:05.906 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:05.906 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:05.906 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:05.906 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:05.906 [2024-11-20 07:24:28.161219] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:05.906 [2024-11-20 07:24:28.161232] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:05.906 [2024-11-20 07:24:28.161237] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:05.906 [2024-11-20 07:24:28.161241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:05.906 [2024-11-20 07:24:28.161256] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:05.906 [2024-11-20 07:24:28.161456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.906 [2024-11-20 07:24:28.161468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1036e10 with addr=10.0.0.2, port=4420 00:26:05.906 [2024-11-20 07:24:28.161475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1036e10 is same with the state(6) to be set 00:26:05.906 [2024-11-20 07:24:28.161486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1036e10 (9): Bad file descriptor 00:26:05.906 [2024-11-20 07:24:28.161496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:05.906 [2024-11-20 07:24:28.161503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:05.906 [2024-11-20 07:24:28.161510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:05.906 [2024-11-20 07:24:28.161516] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:05.906 [2024-11-20 07:24:28.161521] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:05.906 [2024-11-20 07:24:28.161526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:05.906 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:26:05.906 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:05.906 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:05.906 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.906 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:05.906 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.906 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:05.906 [2024-11-20 07:24:28.171287] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:05.906 [2024-11-20 07:24:28.171301] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:05.906 [2024-11-20 07:24:28.171306] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:05.906 [2024-11-20 07:24:28.171310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:05.906 [2024-11-20 07:24:28.171325] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:05.906 [2024-11-20 07:24:28.171625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.906 [2024-11-20 07:24:28.171637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1036e10 with addr=10.0.0.2, port=4420 00:26:05.907 [2024-11-20 07:24:28.171645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1036e10 is same with the state(6) to be set 00:26:05.907 [2024-11-20 07:24:28.171656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1036e10 (9): Bad file descriptor 00:26:05.907 [2024-11-20 07:24:28.171666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:05.907 [2024-11-20 07:24:28.171673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:05.907 [2024-11-20 07:24:28.171680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:05.907 [2024-11-20 07:24:28.171686] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:05.907 [2024-11-20 07:24:28.171691] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:05.907 [2024-11-20 07:24:28.171695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
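The get_subsystem_paths nvme0 call traced here is a pipeline over the host-side RPC socket: list the named controller, extract the trsvcid (port) from each path's transport ID, sort numerically, and collapse the output to one line. A sketch assembled from the traced pipeline (rpc_cmd stands in for the harness wrapper around SPDK's rpc.py; the body in host/discovery.sh may differ slightly):

    get_subsystem_paths() {
        local name=$1
        # one ctrlrs[] entry per path; trid.trsvcid carries the port (4420/4421)
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' \
            | sort -n \
            | xargs   # normalize to a single space-separated line
    }

This is why the [[ 4420 4421 == \4\4\2\1 ]] comparison below fails while the 4420 path is still draining, and the retry one second later sees just 4421 and passes.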
00:26:06.167 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.167 [2024-11-20 07:24:28.181357] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:06.167 [2024-11-20 07:24:28.181369] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:06.167 [2024-11-20 07:24:28.181374] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:06.167 [2024-11-20 07:24:28.181379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:06.167 [2024-11-20 07:24:28.181392] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:06.167 [2024-11-20 07:24:28.181673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.167 [2024-11-20 07:24:28.181684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1036e10 with addr=10.0.0.2, port=4420 00:26:06.167 [2024-11-20 07:24:28.181692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1036e10 is same with the state(6) to be set 00:26:06.167 [2024-11-20 07:24:28.181703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1036e10 (9): Bad file descriptor 00:26:06.167 [2024-11-20 07:24:28.181713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:06.167 [2024-11-20 07:24:28.181724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:06.167 [2024-11-20 07:24:28.181731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:06.167 [2024-11-20 07:24:28.181737] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:06.167 [2024-11-20 07:24:28.181742] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:06.167 [2024-11-20 07:24:28.181746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:06.167 [2024-11-20 07:24:28.191423] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:06.167 [2024-11-20 07:24:28.191435] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:06.167 [2024-11-20 07:24:28.191439] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:06.167 [2024-11-20 07:24:28.191444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:06.167 [2024-11-20 07:24:28.191457] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:06.167 [2024-11-20 07:24:28.191756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.167 [2024-11-20 07:24:28.191767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1036e10 with addr=10.0.0.2, port=4420 00:26:06.167 [2024-11-20 07:24:28.191775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1036e10 is same with the state(6) to be set 00:26:06.167 [2024-11-20 07:24:28.191786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1036e10 (9): Bad file descriptor 00:26:06.167 [2024-11-20 07:24:28.191796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:06.167 [2024-11-20 07:24:28.191802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:06.167 [2024-11-20 07:24:28.191809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:06.167 [2024-11-20 07:24:28.191815] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:06.167 [2024-11-20 07:24:28.191820] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:06.167 [2024-11-20 07:24:28.191824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:06.167 [2024-11-20 07:24:28.191852] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:06.168 [2024-11-20 07:24:28.191868] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:06.168 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:26:06.168 07:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # return 0 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.111 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:07.112 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:07.112 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:07.112 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:07.112 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:07.112 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:07.112 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:07.112 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:07.112 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.112 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:07.112 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:07.112 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:07.112 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.372 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:26:07.372 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:07.372 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:07.372 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:07.372 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:07.372 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:07.372 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:07.372 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.373 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.314 [2024-11-20 07:24:30.547345] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:08.314 [2024-11-20 07:24:30.547359] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:08.314 [2024-11-20 07:24:30.547368] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:08.574 [2024-11-20 07:24:30.634618] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:08.833 [2024-11-20 07:24:30.861807] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:26:08.833 [2024-11-20 07:24:30.862502] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x119e9a0:1 started. 
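The is_notification_count_eq checks above lean on a small cursor pattern: ask the target for notifications past the current notify_id, count them with jq, and advance the cursor so the next check only sees new events (notify_id moves from 2 to 4 here after two bdev events). A sketch of that shape; the exact cursor update is inferred from the traced values and may not match host/discovery.sh line for line:

    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        # assumed update rule, consistent with the traced 2 -> 4 step
        notify_id=$((notify_id + notification_count))
    }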
00:26:08.833 [2024-11-20 07:24:30.863889] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:08.833 [2024-11-20 07:24:30.863912] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:08.833 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.833 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:08.833 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:08.833 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:08.833 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:08.833 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:08.833 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:08.833 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:08.833 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:08.833 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.833 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.833 [2024-11-20 07:24:30.874663] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x119e9a0 was disconnected and freed. delete nvme_qpair. 
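NOT, as traced around these bdev_nvme_start_discovery calls, is an inverted assertion: run the wrapped command, record whether it failed, and succeed only if it did. The request that follows is expected to fail with -17 "File exists", because a discovery service named nvme is already running. A sketch matching the traced control flow (the es > 128 branch, which handles signal-like exit codes, is simplified away; the real helper in common/autotest_common.sh does more):

    NOT() {
        local es=0
        "$@" || es=$?      # run the wrapped command, capture its failure
        (( !es == 0 ))     # succeed only when the command failed (sh@677)
    }

    # as traced:
    # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
    #     -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w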
00:26:08.833 request: 00:26:08.833 { 00:26:08.833 "name": "nvme", 00:26:08.833 "trtype": "tcp", 00:26:08.833 "traddr": "10.0.0.2", 00:26:08.833 "adrfam": "ipv4", 00:26:08.833 "trsvcid": "8009", 00:26:08.833 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:08.833 "wait_for_attach": true, 00:26:08.833 "method": "bdev_nvme_start_discovery", 00:26:08.833 "req_id": 1 00:26:08.833 } 00:26:08.833 Got JSON-RPC error response 00:26:08.833 response: 00:26:08.833 { 00:26:08.833 "code": -17, 00:26:08.833 "message": "File exists" 00:26:08.833 } 00:26:08.833 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:08.833 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:08.833 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:08.833 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:08.833 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.834 07:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.834 request: 00:26:08.834 { 00:26:08.834 "name": "nvme_second", 00:26:08.834 "trtype": "tcp", 00:26:08.834 "traddr": "10.0.0.2", 00:26:08.834 "adrfam": "ipv4", 00:26:08.834 "trsvcid": "8009", 00:26:08.834 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:08.834 "wait_for_attach": true, 00:26:08.834 "method": "bdev_nvme_start_discovery", 00:26:08.834 "req_id": 1 00:26:08.834 } 00:26:08.834 Got JSON-RPC error response 00:26:08.834 response: 00:26:08.834 { 00:26:08.834 "code": -17, 00:26:08.834 "message": "File exists" 00:26:08.834 } 00:26:08.834 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:08.834 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:08.834 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:08.834 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:08.834 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:08.834 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:08.834 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:08.834 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:08.834 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.834 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:08.834 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.834 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:08.834 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.834 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:08.834 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:08.834 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:08.834 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:08.834 07:24:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.834 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:08.834 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.834 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:08.834 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.108 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:09.108 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:09.108 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:09.108 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:09.108 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:09.108 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:09.108 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:09.108 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:09.108 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:09.108 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.108 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:10.047 [2024-11-20 07:24:32.123340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.047 [2024-11-20 07:24:32.123365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1038220 with addr=10.0.0.2, port=8010 00:26:10.047 [2024-11-20 07:24:32.123375] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:10.047 [2024-11-20 07:24:32.123381] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:10.047 [2024-11-20 07:24:32.123387] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:10.989 [2024-11-20 07:24:33.125631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.989 [2024-11-20 07:24:33.125649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1038220 with addr=10.0.0.2, port=8010 00:26:10.989 [2024-11-20 07:24:33.125658] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:10.989 [2024-11-20 07:24:33.125663] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:10.989 [2024-11-20 07:24:33.125671] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:11.930 [2024-11-20 07:24:34.127673] 
bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:11.930 request: 00:26:11.930 { 00:26:11.930 "name": "nvme_second", 00:26:11.930 "trtype": "tcp", 00:26:11.930 "traddr": "10.0.0.2", 00:26:11.930 "adrfam": "ipv4", 00:26:11.930 "trsvcid": "8010", 00:26:11.930 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:11.930 "wait_for_attach": false, 00:26:11.930 "attach_timeout_ms": 3000, 00:26:11.930 "method": "bdev_nvme_start_discovery", 00:26:11.930 "req_id": 1 00:26:11.930 } 00:26:11.930 Got JSON-RPC error response 00:26:11.930 response: 00:26:11.930 { 00:26:11.930 "code": -110, 00:26:11.930 "message": "Connection timed out" 00:26:11.930 } 00:26:11.930 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:11.930 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:11.930 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:11.930 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:11.930 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:11.930 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:11.931 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:11.931 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:11.931 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.931 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:11.931 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.931 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:11.931 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.931 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:11.931 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:11.931 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3642102 00:26:11.931 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:11.931 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:11.931 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:26:11.931 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:11.931 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:11.931 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:11.931 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:11.931 rmmod nvme_tcp 00:26:12.190 rmmod nvme_fabrics 00:26:12.190 rmmod nvme_keyring 00:26:12.190 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:12.190 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:12.190 07:24:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:12.190 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3641801 ']' 00:26:12.191 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3641801 00:26:12.191 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 3641801 ']' 00:26:12.191 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 3641801 00:26:12.191 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:26:12.191 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:12.191 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3641801 00:26:12.191 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:12.191 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:12.191 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3641801' 00:26:12.191 killing process with pid 3641801 00:26:12.191 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 3641801 00:26:12.191 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 3641801 00:26:12.191 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:12.191 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:12.191 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:12.191 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:12.191 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:12.191 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:12.191 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:12.191 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:12.191 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:12.191 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.191 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:12.191 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:14.733 00:26:14.733 real 0m21.215s 00:26:14.733 user 0m25.293s 00:26:14.733 sys 0m7.381s 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.733 ************************************ 00:26:14.733 END TEST nvmf_host_discovery 00:26:14.733 ************************************ 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test 
nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.733 ************************************ 00:26:14.733 START TEST nvmf_host_multipath_status 00:26:14.733 ************************************ 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:14.733 * Looking for test storage... 00:26:14.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:14.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.733 --rc genhtml_branch_coverage=1 00:26:14.733 --rc genhtml_function_coverage=1 00:26:14.733 --rc genhtml_legend=1 00:26:14.733 --rc geninfo_all_blocks=1 00:26:14.733 --rc geninfo_unexecuted_blocks=1 00:26:14.733 00:26:14.733 ' 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:14.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.733 --rc genhtml_branch_coverage=1 00:26:14.733 --rc genhtml_function_coverage=1 00:26:14.733 --rc genhtml_legend=1 00:26:14.733 --rc geninfo_all_blocks=1 00:26:14.733 --rc geninfo_unexecuted_blocks=1 00:26:14.733 00:26:14.733 ' 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:14.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.733 --rc genhtml_branch_coverage=1 00:26:14.733 --rc genhtml_function_coverage=1 00:26:14.733 --rc genhtml_legend=1 00:26:14.733 --rc geninfo_all_blocks=1 00:26:14.733 --rc geninfo_unexecuted_blocks=1 00:26:14.733 00:26:14.733 ' 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:14.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.733 --rc genhtml_branch_coverage=1 00:26:14.733 --rc genhtml_function_coverage=1 00:26:14.733 --rc genhtml_legend=1 00:26:14.733 --rc geninfo_all_blocks=1 00:26:14.733 --rc geninfo_unexecuted_blocks=1 00:26:14.733 00:26:14.733 ' 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
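The trace above is the harness checking whether the installed lcov is older than 2.x: `lt 1.15 2` defers to `cmp_versions`, which splits both version strings on `.` and `-` and compares them numerically field by field, and the `--rc lcov_branch_coverage=1 ...` options are exported based on the result. A minimal sketch of that comparison logic, paraphrased from the xtrace above (the function bodies here are an approximation, not the literal scripts/common.sh source):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    # Split "1.15" -> (1 15) and "2" -> (2) on '.' and '-', then walk the
    # fields; the first unequal pair decides the comparison.
    local IFS=.- op=$2 v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all fields equal
}

lt 1.15 2 && echo "lcov 1.15 is older than 2"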
00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:14.733 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:14.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:14.734 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:22.878 07:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:22.878 07:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:22.878 07:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:22.878 07:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:22.878 07:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:22.878 07:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:22.878 07:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:22.878 07:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:22.878 07:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:22.878 07:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:22.878 07:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:22.878 07:24:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:22.878 07:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:22.878 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:22.878 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:26:22.878 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:22.878 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:22.878 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:22.878 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:22.878 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:22.879 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
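What gather_supported_nvmf_pci_devs is doing here: nvmf/common.sh builds allowlists of NIC IDs per family (Intel E810 0x1592/0x159b, X722 0x37d2, and a set of Mellanox ConnectX parts) and then reports every PCI function whose vendor/device pair matches, which is where the "Found 0000:4b:00.0 (0x8086 - 0x159b)" lines come from. A rough standalone equivalent of that scan, reading sysfs directly (the real script consults a prebuilt pci_bus_cache, so treat this as an illustrative sketch rather than the actual logic):

#!/usr/bin/env bash
# Sketch: find Intel E810 ports the same way the log's allowlist match does.
intel=0x8086
e810=(0x1592 0x159b)

for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    [[ $vendor == "$intel" ]] || continue
    for id in "${e810[@]}"; do
        [[ $device == "$id" ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        # The kernel lists any bound netdev under <pci>/net/, e.g. cvl_0_0:
        ls "$pci/net" 2>/dev/null
    done
done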
00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:22.879 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:22.879 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:26:22.879 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:22.879 07:24:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:22.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:22.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:26:22.879 00:26:22.879 --- 10.0.0.2 ping statistics --- 00:26:22.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.879 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:22.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:22.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:26:22.879 00:26:22.879 --- 10.0.0.1 ping statistics --- 00:26:22.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.879 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3648349 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3648349 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 3648349 ']' 00:26:22.879 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.880 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:22.880 07:24:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.880 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:22.880 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:22.880 [2024-11-20 07:24:44.428392] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:26:22.880 [2024-11-20 07:24:44.428462] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.880 [2024-11-20 07:24:44.529213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:22.880 [2024-11-20 07:24:44.580005] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:22.880 [2024-11-20 07:24:44.580056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:22.880 [2024-11-20 07:24:44.580064] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:22.880 [2024-11-20 07:24:44.580071] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:22.880 [2024-11-20 07:24:44.580078] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:22.880 [2024-11-20 07:24:44.581710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.880 [2024-11-20 07:24:44.581714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.141 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:23.141 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:26:23.141 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:23.141 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:23.141 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:23.141 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:23.141 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3648349 00:26:23.141 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:23.403 [2024-11-20 07:24:45.436308] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:23.403 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:23.403 Malloc0 00:26:23.664 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:26:23.664 07:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:23.925 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:24.186 [2024-11-20 07:24:46.258440] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.186 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:24.186 [2024-11-20 07:24:46.450945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:24.447 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3648716 00:26:24.447 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:24.447 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:24.447 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3648716 /var/tmp/bdevperf.sock 00:26:24.447 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 3648716 ']' 00:26:24.448 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:24.448 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:24.448 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:24.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
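Condensed, the setup just traced provisions one ANA-reporting subsystem with a RAM-backed namespace and two TCP listeners, then starts bdevperf as the host in wait-for-RPC mode. These are the same rpc.py calls visible above, gathered in one place with the long workspace paths shortened for readability:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
       -a -s SPDK00000000000001 -r -m 2                         # -r enables ANA reporting
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# Host side: bdevperf idles on its own RPC socket (-z) until perform_tests
# is sent later; the two bdev_nvme_attach_controller calls that follow in
# the trace give it one multipath controller with a path per listener.
bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90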
00:26:24.448 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:24.448 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:25.394 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:25.394 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:26:25.394 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:25.394 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:25.967 Nvme0n1 00:26:25.967 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:26.227 Nvme0n1 00:26:26.489 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:26.489 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:28.410 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:28.410 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:28.670 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:28.670 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:30.055 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:30.055 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:30.055 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.055 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:30.055 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.055 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:30.055 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.055 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:30.055 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:30.055 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:30.055 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.055 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:30.316 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.316 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:30.316 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.316 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:30.577 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.577 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:30.577 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.577 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:30.577 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.577 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:30.577 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.577 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:30.837 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.837 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:30.837 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
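Every port_status probe in the trace is the same query with a different field: ask bdevperf's RPC server for its current I/O paths and let jq pluck one attribute (current, connected, or accessible) for the path whose trsvcid matches the listener under test. A condensed form of the helper as it appears in the xtrace (path to rpc.py shortened):

# port_status PORT FIELD EXPECTED -- sketch of host/multipath_status.sh's probe
port_status() {
    local port=$1 field=$2 expected=$3 got
    got=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ $got == "$expected" ]]
}

port_status 4420 current true      # with both listeners optimized, 4420 carries the I/O
port_status 4421 accessible true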
00:26:31.097 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:31.097 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:32.484 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:32.484 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:32.484 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.484 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:32.484 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:32.484 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:32.484 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.484 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:32.484 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.484 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:32.484 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.484 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:32.744 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.744 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:32.744 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.744 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:33.004 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.004 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:33.004 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
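From here on the test walks the ANA state matrix. set_ANA_state assigns one state per listener, and the expected check_status outcome follows NVMe ANA semantics: the host prefers an optimized path as current, falls back to a non_optimized path only when nothing better exists, and treats an inaccessible listener as still connected but no longer accessible for I/O. Per the trace, the helper reduces to two target-side RPCs (sketched here, paths shortened):

# set_ANA_state STATE_4420 STATE_4421 -- sketch of the traced helper
set_ANA_state() {
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
           -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
           -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

set_ANA_state non_optimized optimized   # expect failover: 4421 becomes the current path
sleep 1                                 # give the host time to observe the ANA change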
00:26:33.004 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:33.004 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.004 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:33.004 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.004 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:33.265 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.265 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:33.265 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:33.526 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:33.787 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:34.727 07:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:34.727 07:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:34.727 07:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.727 07:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:34.987 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.987 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:34.987 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:34.987 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.987 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:34.987 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:34.987 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.987 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:35.247 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.247 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:35.247 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.247 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:35.507 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.507 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:35.507 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.507 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:35.507 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.507 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:35.507 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.507 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:35.767 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.767 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:35.767 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:36.027 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:36.287 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:37.228 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:37.228 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:37.228 07:24:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.228 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:37.228 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.228 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:37.489 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.489 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:37.489 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:37.489 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:37.489 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.489 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:37.750 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.750 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:37.750 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.750 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:38.011 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.011 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:38.011 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.011 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:38.011 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.011 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:38.011 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.011 07:25:00 
00:26:38.011 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:38.271 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:38.271 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:26:38.271 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:26:38.532 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:26:38.532 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:26:39.915 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:26:39.915 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:26:39.915 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:39.915 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:39.915 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:39.915 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:26:39.915 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:39.915 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:39.915 07:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:39.915 07:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:39.915 07:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:39.915 07:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:40.177 07:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:40.177 07:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:40.177 07:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.177 07:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:40.439 07:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.439 07:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:40.439 07:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.439 07:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:40.439 07:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:40.439 07:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:40.439 07:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.439 07:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:40.700 07:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:40.700 07:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:40.700 07:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:40.960 07:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:41.221 07:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:42.163 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:42.163 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:42.163 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.163 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:42.425 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:42.425 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:42.425 07:25:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:42.425 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:42.425 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:42.425 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:42.425 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:42.425 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:42.686 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:42.686 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:42.686 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:42.686 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:42.947 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:42.947 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:26:42.947 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:42.947 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:42.947 07:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:42.947 07:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:26:42.947 07:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:42.947 07:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:43.207 07:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:43.207 07:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
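Up to here every check expected exactly one path with current == true, the active_passive default. The @116 call just above switches the Nvme0n1 bdev to the active_active multipath policy; under that policy (in SPDK's bdev_nvme, to my understanding) every connected, accessible path in an optimized ANA group carries I/O in round-robin, so once both listeners are set to optimized the @121 check below expects current == true on both 4420 and 4421. The RPC as traced:

    # As traced at @116: -b names the bdev, -p picks the policy
    # (active_passive is the default; active_active spreads I/O across optimized paths).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active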
00:26:43.467 07:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:26:43.467 07:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:26:43.467 07:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:26:43.728 07:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:26:44.672 07:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:26:44.672 07:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:26:44.672 07:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:44.672 07:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:44.931 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:44.931 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:26:44.931 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:44.931 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:45.192 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:45.192 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:45.192 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:45.192 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:45.452 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:45.452 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:45.452 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:45.452 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:45.452 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:45.452 07:25:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:45.452 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.452 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:45.712 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.712 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:45.712 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.712 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:45.974 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.974 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:45.974 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:46.235 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:46.235 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:47.620 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:47.621 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:47.621 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.621 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:47.621 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:47.621 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:47.621 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.621 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:47.621 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:47.621 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:47.621 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.621 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:47.882 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:47.882 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:47.882 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.882 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:48.142 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.142 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:48.142 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.142 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:48.142 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.142 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:48.142 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:48.142 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.403 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.403 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:48.403 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:48.664 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:48.664 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
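Each scenario in this section has the same shape: set_ANA_state applies one ANA state per listener, sleep 1 presumably gives the host time to consume the ANA change notification, and check_status asserts the resulting path view. From the @59/@60 and @68-@73 traces the helpers can be reconstructed roughly as below (a sketch inferred from the trace, not the verbatim script; RPC and NQN are shorthands introduced here):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    set_ANA_state() {  # $1 = state for listener 4420, $2 = state for listener 4421
        $RPC nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $RPC nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    check_status() {  # argument order matches the @68-@73 port_status calls
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
        port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

So check_status true true true true true true, as at @131 just below, asserts that both paths are simultaneously current, connected, and accessible.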
00:26:49.717 07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:49.717 07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:49.717 07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:49.717 07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:50.036 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.036 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:50.037 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.037 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:50.037 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.037 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:50.037 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.037 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:50.298 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.298 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:50.298 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.298 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:50.561 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.561 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:50.561 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.561 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:50.822 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.822 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:50.822 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.822 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:50.822 07:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.822 07:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:50.822 07:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:51.083 07:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:51.344 07:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:52.286 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:52.286 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:52.286 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.287 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:52.548 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.548 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:52.548 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.548 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:52.548 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:52.548 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:52.548 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.548 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:52.810 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]]
00:26:52.810 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:52.810 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:52.810 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:53.072 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:53.072 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:53.072 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:53.072 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:53.072 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:53.072 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:26:53.072 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:53.072 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:53.333 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:53.333 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3648716
00:26:53.333 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 3648716 ']'
00:26:53.333 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 3648716
00:26:53.333 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname
00:26:53.333 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:26:53.333 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3648716
00:26:53.333 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:26:53.333 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:26:53.333 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3648716'
killing process with pid 3648716
00:26:53.333 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 3648716
00:26:53.333 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 3648716
00:26:53.333 {
00:26:53.334 "results": [
00:26:53.334 {
00:26:53.334 "job": "Nvme0n1",
00:26:53.334 "core_mask": "0x4",
00:26:53.334 "workload": "verify",
00:26:53.334 "status": "terminated",
00:26:53.334 "verify_range": {
00:26:53.334 "start": 0,
00:26:53.334 "length": 16384
00:26:53.334 },
00:26:53.334 "queue_depth": 128,
00:26:53.334 "io_size": 4096,
00:26:53.334 "runtime": 26.937522,
00:26:53.334 "iops": 11934.728071869416,
00:26:53.334 "mibps": 46.62003153073991,
00:26:53.334 "io_failed": 0,
00:26:53.334 "io_timeout": 0,
00:26:53.334 "avg_latency_us": 10706.558086256164,
00:26:53.334 "min_latency_us": 866.9866666666667,
00:26:53.334 "max_latency_us": 3019898.88
00:26:53.334 }
00:26:53.334 ],
00:26:53.334 "core_count": 1
00:26:53.334 }
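The bdevperf results block above is internally consistent: with io_size 4096, iops x io_size = 11934.728071869416 x 4096 B is about 46.62 MiB/s, matching the mibps field exactly, and iops x runtime works out to roughly 321k I/Os over the 26.94 s run. io_failed stays 0 even though the try.txt replay below is full of ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions: 03/02 is NVMe path-related status (SCT 0x3, SC 0x2, ANA inaccessible), which the multipath bdev handles by retrying on another path rather than failing the I/O, and the max_latency_us of ~3.0 s is consistent with I/O held up during the window where both listeners were inaccessible. A quick check of the throughput figure:

    awk 'BEGIN { printf "%.11f MiB/s\n", 11934.728071869416 * 4096 / (1024 * 1024) }'
    # -> 46.62003153074 MiB/s, matching the "mibps" field in the results above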
00:26:53.334 "core_mask": "0x4", 00:26:53.334 "workload": "verify", 00:26:53.334 "status": "terminated", 00:26:53.334 "verify_range": { 00:26:53.334 "start": 0, 00:26:53.334 "length": 16384 00:26:53.334 }, 00:26:53.334 "queue_depth": 128, 00:26:53.334 "io_size": 4096, 00:26:53.334 "runtime": 26.937522, 00:26:53.334 "iops": 11934.728071869416, 00:26:53.334 "mibps": 46.62003153073991, 00:26:53.334 "io_failed": 0, 00:26:53.334 "io_timeout": 0, 00:26:53.334 "avg_latency_us": 10706.558086256164, 00:26:53.334 "min_latency_us": 866.9866666666667, 00:26:53.334 "max_latency_us": 3019898.88 00:26:53.334 } 00:26:53.334 ], 00:26:53.334 "core_count": 1 00:26:53.334 } 00:26:53.597 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3648716 00:26:53.597 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:53.597 [2024-11-20 07:24:46.531991] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:26:53.597 [2024-11-20 07:24:46.532072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3648716 ] 00:26:53.597 [2024-11-20 07:24:46.626106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.597 [2024-11-20 07:24:46.677020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.597 Running I/O for 90 seconds... 00:26:53.597 11061.00 IOPS, 43.21 MiB/s [2024-11-20T06:25:15.875Z] 11147.00 IOPS, 43.54 MiB/s [2024-11-20T06:25:15.875Z] 11176.67 IOPS, 43.66 MiB/s [2024-11-20T06:25:15.875Z] 11568.25 IOPS, 45.19 MiB/s [2024-11-20T06:25:15.875Z] 11851.60 IOPS, 46.30 MiB/s [2024-11-20T06:25:15.875Z] 12018.67 IOPS, 46.95 MiB/s [2024-11-20T06:25:15.875Z] 12139.57 IOPS, 47.42 MiB/s [2024-11-20T06:25:15.875Z] 12246.25 IOPS, 47.84 MiB/s [2024-11-20T06:25:15.875Z] 12317.00 IOPS, 48.11 MiB/s [2024-11-20T06:25:15.875Z] 12374.00 IOPS, 48.34 MiB/s [2024-11-20T06:25:15.875Z] 12425.27 IOPS, 48.54 MiB/s [2024-11-20T06:25:15.875Z] [2024-11-20 07:25:00.559153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.597 [2024-11-20 07:25:00.559190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:53.597 [2024-11-20 07:25:00.559222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.597 [2024-11-20 07:25:00.559229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:53.597 [2024-11-20 07:25:00.560131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.597 [2024-11-20 07:25:00.560138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:53.597 [2024-11-20 07:25:00.560151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.597 [2024-11-20 07:25:00.560156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:53.597 [2024-11-20 07:25:00.560172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.597 [2024-11-20 07:25:00.560177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:53.597 [2024-11-20 07:25:00.560189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.597 [2024-11-20 07:25:00.560194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:53.597 [2024-11-20 07:25:00.560207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.597 [2024-11-20 07:25:00.560212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:53.597 [2024-11-20 07:25:00.560225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.597 [2024-11-20 07:25:00.560231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:53.597 [2024-11-20 07:25:00.560457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.597 [2024-11-20 07:25:00.560468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:53.597 [2024-11-20 07:25:00.560482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 
[2024-11-20 07:25:00.560565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9176 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:6 nsid:1 lba:9256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.560989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.560994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.561006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.561011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.561024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.561029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.561042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.561047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.561059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.561064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.561077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.561083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.561095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.598 [2024-11-20 07:25:00.561100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:53.598 [2024-11-20 07:25:00.561113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.599 [2024-11-20 07:25:00.561118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:53.599 [2024-11-20 07:25:00.561131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.599 [2024-11-20 07:25:00.561136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:53.599 [2024-11-20 07:25:00.561149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.599 [2024-11-20 07:25:00.561155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:53.599 [2024-11-20 07:25:00.561172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.599 [2024-11-20 07:25:00.561177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:53.599 [2024-11-20 07:25:00.561189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.599 [2024-11-20 07:25:00.561195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:53.599 [2024-11-20 07:25:00.561207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.599 [2024-11-20 07:25:00.561212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:53.599 [2024-11-20 07:25:00.561225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.599 [2024-11-20 07:25:00.561230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:53.599 [2024-11-20 07:25:00.561242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.599 [2024-11-20 07:25:00.561248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:53.599 [2024-11-20 07:25:00.561260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.599 [2024-11-20 07:25:00.561265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:53.599 
[2024-11-20 07:25:00.561278 - 07:25:00.562147] nvme_qpair.c: 243/474: *NOTICE*: repeated WRITE sqid:1 nsid:1 command (lba 9416-9688, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) / ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion pairs, sqhd:0038-005a p:0 m:0 dnr:0 -- several dozen near-identical command/completion entries condensed
00:26:53.600 12381.67 IOPS, 48.37 MiB/s [2024-11-20T06:25:15.878Z] 11429.23 IOPS, 44.65 MiB/s [2024-11-20T06:25:15.878Z] 10612.86 IOPS, 41.46 MiB/s [2024-11-20T06:25:15.878Z] 9964.73 IOPS, 38.92 MiB/s [2024-11-20T06:25:15.878Z] 10151.81 IOPS, 39.66 MiB/s [2024-11-20T06:25:15.878Z] 10310.24 IOPS, 40.27 MiB/s [2024-11-20T06:25:15.878Z] 10647.72 IOPS, 41.59 MiB/s [2024-11-20T06:25:15.878Z] 10963.16 IOPS, 42.82 MiB/s [2024-11-20T06:25:15.878Z] 11195.30 IOPS, 43.73 MiB/s [2024-11-20T06:25:15.878Z] 11270.24 IOPS, 44.02 MiB/s [2024-11-20T06:25:15.878Z] 11334.27 IOPS, 44.27 MiB/s [2024-11-20T06:25:15.878Z] 11509.30 IOPS, 44.96 MiB/s [2024-11-20T06:25:15.878Z] 11720.33 IOPS, 45.78 MiB/s [2024-11-20T06:25:15.878Z]
00:26:53.600 [2024-11-20 07:25:13.363009 - 07:25:13.365421] nvme_qpair.c: 243/474: *NOTICE*: repeated READ/WRITE sqid:1 nsid:1 command (lba 114352-115064, len:8) / ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion pairs, sqhd:0052-0002 p:0 m:0 dnr:0 -- several dozen near-identical command/completion entries condensed
00:26:53.601 11865.68 IOPS, 46.35 MiB/s [2024-11-20T06:25:15.878Z] 11906.04 IOPS, 46.51 MiB/s [2024-11-20T06:25:15.878Z] Received shutdown signal, test time was about 26.938130 seconds
00:26:53.601
00:26:53.601 Latency(us)
00:26:53.601 [2024-11-20T06:25:15.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:53.601 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:53.601 Verification LBA range: start 0x0 length 0x4000
00:26:53.601 Nvme0n1 : 26.94 11934.73 46.62 0.00 0.00 10706.56 866.99 3019898.88
00:26:53.601 [2024-11-20T06:25:15.879Z] ===================================================================================================================
00:26:53.601 [2024-11-20T06:25:15.879Z] Total : 11934.73 46.62 0.00 0.00 10706.56 866.99 3019898.88
00:26:53.601 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:53.863 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:26:53.863 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:53.863 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:26:53.863 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:53.863 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:26:53.863 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:53.863 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:26:53.863 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:53.863 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:53.863 rmmod nvme_tcp
00:26:53.863 rmmod nvme_fabrics
00:26:53.863 rmmod nvme_keyring
00:26:53.863 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:53.863 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:26:53.863 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:26:53.863 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3648349 ']'
00:26:53.863 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3648349
00:26:53.863 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 3648349 ']'
00:26:53.863 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 3648349
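[editor's note] The teardown entering here is autotest_common.sh's killprocess: it validates the recorded pid, probes it with kill -0, checks the process name (the uname/ps steps that follow), then signals and reaps it. A minimal sketch of the same stop-and-reap pattern, assuming the target was started by this same shell (the function name is illustrative, not the SPDK helper itself):

    # stop a background daemon by pid and wait for it to exit
    stop_pid() {
        local pid=$1
        [[ -z "$pid" ]] && return 1             # no pid recorded
        kill -0 "$pid" 2>/dev/null || return 0  # already gone
        kill "$pid"                             # polite SIGTERM
        wait "$pid" 2>/dev/null                 # reap our own child
    }
    stop_pid 3648349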
00:26:53.863 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname
00:26:53.863 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:26:53.863 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3648349
00:26:53.863 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:26:53.863 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:26:53.863 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3648349'
00:26:53.863 killing process with pid 3648349
00:26:53.863 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 3648349
00:26:53.863 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 3648349
00:26:53.863 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:53.863 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:53.863 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:53.863 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:26:53.863 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:26:53.863 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:53.863 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:26:54.124 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:54.124 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:54.124 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:54.124 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:54.124 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:56.038 07:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:56.038
00:26:56.038 real	0m41.636s
00:26:56.038 user	1m47.914s
00:26:56.038 sys	0m11.572s
00:26:56.038 07:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable
00:26:56.038 07:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:56.038 ************************************
00:26:56.038 END TEST nvmf_host_multipath_status
00:26:56.038 ************************************
00:26:56.038 07:25:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:56.038 07:25:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:26:56.038 07:25:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:26:56.038 07:25:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:56.038 ************************************
00:26:56.038 START TEST nvmf_discovery_remove_ifc
00:26:56.038 ************************************
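[editor's note] run_test, visible in the trace above, is the autotest_common.sh wrapper that prints the START/END banners and the real/user/sys timing around each test script. A rough sketch of that wrapping logic under stated assumptions (simplified; the real helper also records results for the final report):

    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # run the test script with its arguments
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test_sketch nvmf_discovery_remove_ifc ./discovery_remove_ifc.sh --transport=tcp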
nvmf_discovery_remove_ifc 00:26:56.038 ************************************ 00:26:56.038 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:56.299 * Looking for test storage... 00:26:56.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:56.299 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:26:56.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:56.300 --rc genhtml_branch_coverage=1
00:26:56.300 --rc genhtml_function_coverage=1
00:26:56.300 --rc genhtml_legend=1
00:26:56.300 --rc geninfo_all_blocks=1
00:26:56.300 --rc geninfo_unexecuted_blocks=1
00:26:56.300
00:26:56.300 '
00:26:56.300 [common/autotest_common.sh@1704-@1705: the LCOV_OPTS= assignment and the export/assignment of LCOV='lcov ...' repeat the identical --rc flag block three more times - duplicates condensed]
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
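[editor's note] The lt/cmp_versions trace condensed above compares version strings field by field after splitting on dots. A condensed sketch of the same idea (not the verbatim scripts/common.sh body):

    lt_sketch() {  # usage: lt_sketch 1.15 2  -> success if $1 < $2
        local IFS=.-: v
        local -a a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++)); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # earlier field decides
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt_sketch 1.15 2 && echo "lcov 1.15 is older than 2"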
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:56.300 [paths/export.sh@2-@6: successive PATH assignments, a final export PATH and an echo that prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of the inherited PATH; each re-source repeats the prepend, so the logged values carry the same three directories eight times over - duplicate-laden values condensed]
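[editor's note] Each re-source of paths/export.sh prepends the same toolchain directories, which is why the condensed PATH values above carry every entry several times over. Harmless, but noisy; a hedged one-liner to collapse duplicates while keeping first-occurrence order (illustrative, not part of the harness):

    # drop repeated PATH entries, preserving the first occurrence of each
    dedupe_path() {
        PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
        PATH=${PATH%:}            # trim the trailing colon awk leaves behind
        export PATH
    }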
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:56.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:56.300 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:56.301 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
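[editor's note] The "[: : integer expression expected" message above is common.sh line 33 running '[' '' -eq 1 ']' against an unset value; the harness shrugs it off, but the robust idiom is to default the variable before a numeric test. A sketch (the variable name is hypothetical):

    # fragile: breaks when SOME_FLAG is empty or unset
    #   [ "$SOME_FLAG" -eq 1 ] && echo enabled
    # robust: substitute 0 for empty/unset before comparing
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo enabled
    fi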
00:26:56.301 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:26:56.301 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:26:56.301 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:26:56.301 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:26:56.301 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:26:56.301 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:26:56.301 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:56.301 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:56.301 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:56.301 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:56.301 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:56.301 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:56.301 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:56.301 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:56.301 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:26:56.301 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:26:56.301 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable
00:26:56.301 07:25:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:04.449 [nvmf/common.sh@315-@322: pci_devs/pci_net_devs/pci_drivers/net_devs/e810/x722/mlx array declarations - repetitive xtrace condensed]
00:27:04.449 [nvmf/common.sh@325-@344: supported device-ID table - e810 += $intel:0x1592, $intel:0x159b; x722 += $intel:0x37d2; mlx += $mellanox:0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015, 0x1013 - per-ID xtrace condensed]
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:27:04.449 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
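[editor's note] The loop above walks PCI addresses whose vendor/device IDs match the table just built (the E810 functions here report 0x8086:0x159b). Outside the harness the same inventory can be pulled with pciutils; a sketch, assuming lspci is installed:

    # list Intel E810 functions (vendor 8086, device 159b), one per line
    lspci -D -nn -d 8086:159b | awk '{print $1}'
    # -> 0000:4b:00.0 and 0000:4b:00.1 on this rig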
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:27:04.449 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:27:04.449 Found net devices under 0000:4b:00.0: cvl_0_0
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:27:04.449 Found net devices under 0000:4b:00.1: cvl_0_1
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
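[editor's note] The "/sys/bus/pci/devices/$pci/net/"* glob used above is how each PCI function is mapped to its kernel interface name. The same lookup in isolation:

    # print the net device(s) registered under a PCI function
    pci_to_netdev() {
        local pci=$1 d                      # e.g. 0000:4b:00.0
        for d in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e "$d" ]] || continue       # no netdev bound to this function
            echo "${d##*/}"                 # basename = interface name
        done
    }
    pci_to_netdev 0000:4b:00.0              # -> cvl_0_0 in this log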
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
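[editor's note] Condensed, the nvmf_tcp_init sequence above builds a two-endpoint test topology on one host: the initiator NIC (cvl_0_1, 10.0.0.1) stays in the root namespace while the target NIC (cvl_0_0, 10.0.0.2) moves into a dedicated network namespace, so target and initiator traffic crosses a real link. The same steps in isolation (interface names as in this log):

    ip netns add cvl_0_0_ns_spdk                      # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target NIC
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                # reachability check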
00:27:04.449 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:04.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:04.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms
00:27:04.449
00:27:04.450 --- 10.0.0.2 ping statistics ---
00:27:04.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:04.450 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms
00:27:04.450 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:04.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:04.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms
00:27:04.450
00:27:04.450 --- 10.0.0.1 ping statistics ---
00:27:04.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:04.450 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms
00:27:04.450 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:04.450 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0
00:27:04.450 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:04.450 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:04.450 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:04.450 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:04.450 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:04.450 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:04.450 07:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
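[editor's note] What follows is nvmfappstart: nvmf_tgt is launched inside the target namespace and waitforlisten polls (max_retries=100 in the trace) until the RPC socket answers. A reduced sketch of that start-and-wait pattern, assuming the default /var/tmp/spdk.sock socket and that polling rpc_get_methods via rpc.py is an adequate readiness probe:

    # start the target in the namespace, then poll the RPC socket
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # rpc.py fails until the app is up and listening on the socket
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done
    kill -0 "$nvmfpid"    # target survived startup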
00:27:04.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.450 07:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:04.450 07:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.450 [2024-11-20 07:25:26.109132] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:27:04.450 [2024-11-20 07:25:26.109210] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:04.450 [2024-11-20 07:25:26.208109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.450 [2024-11-20 07:25:26.258923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:04.450 [2024-11-20 07:25:26.258978] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:04.450 [2024-11-20 07:25:26.258986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:04.450 [2024-11-20 07:25:26.258994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:04.450 [2024-11-20 07:25:26.259000] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:04.450 [2024-11-20 07:25:26.259806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.712 07:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:04.712 07:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:27:04.712 07:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:04.712 07:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:04.712 07:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.712 07:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:04.712 07:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:04.712 07:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.712 07:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.712 [2024-11-20 07:25:26.965459] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:04.712 [2024-11-20 07:25:26.973725] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:04.712 null0 00:27:04.974 [2024-11-20 07:25:27.005677] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:04.974 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.974 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3658963 00:27:04.974 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3658963 /tmp/host.sock 00:27:04.974 07:25:27 
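Two things happen in the stretch above: NVMF_APP is wrapped so the target binary runs inside the namespace, and waitforlisten blocks until the new process answers on /var/tmp/spdk.sock. A condensed sketch of both; the real waitforlisten in common/autotest_common.sh is more thorough (it polls up to max_retries=100 and probes the RPC socket rather than just testing for its existence):

    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # ip netns exec ... nvmf_tgt
    "${NVMF_APP[@]}" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do                # simplified stand-in, not the real helper
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # target died during startup
        [[ -S /var/tmp/spdk.sock ]] && break       # RPC socket is up
        sleep 0.1
    done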
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:04.974 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 3658963 ']' 00:27:04.974 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:27:04.974 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:04.974 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:04.974 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:04.974 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:04.974 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.974 [2024-11-20 07:25:27.080911] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:27:04.974 [2024-11-20 07:25:27.080974] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658963 ] 00:27:04.974 [2024-11-20 07:25:27.173623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.974 [2024-11-20 07:25:27.229271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.916 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:05.916 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:27:05.916 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:05.916 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:05.916 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.916 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.916 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.916 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:05.916 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.916 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.916 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.916 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:05.916 07:25:27 
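The host side is a second nvmf_tgt in the root namespace, controlled through /tmp/host.sock, and the test hinges on the bdev_nvme_start_discovery call traced above: deliberately short timers (2 s controller loss, 1 s reconnect delay, 1 s fast-io-fail) so that yanking the interface later tears the controller down within a couple of polls. Issued by hand against the same socket it would be:

    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach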
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.916 07:25:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.859 [2024-11-20 07:25:29.014455] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:06.859 [2024-11-20 07:25:29.014485] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:06.859 [2024-11-20 07:25:29.014499] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:07.120 [2024-11-20 07:25:29.143908] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:07.120 [2024-11-20 07:25:29.327396] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:07.120 [2024-11-20 07:25:29.328728] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2108410:1 started. 00:27:07.120 [2024-11-20 07:25:29.330593] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:07.120 [2024-11-20 07:25:29.330656] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:07.120 [2024-11-20 07:25:29.330681] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:07.120 [2024-11-20 07:25:29.330700] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:07.120 [2024-11-20 07:25:29.330726] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:07.120 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.120 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:07.120 [2024-11-20 07:25:29.334094] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2108410 was disconnected and freed. delete nvme_qpair. 
00:27:07.120 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:07.120 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:07.120 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:07.120 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.120 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:07.120 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:07.120 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:07.120 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.120 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:07.120 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:07.120 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:07.380 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:07.380 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:07.380 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:07.380 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:07.380 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.380 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:07.380 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:07.380 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:07.380 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.380 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:07.380 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:08.320 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:08.320 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.320 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:08.320 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.320 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:08.320 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:08.320 07:25:30 
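The recurring bdev_get_bdevs | jq | sort | xargs blocks from here on are the wait_for_bdev polling loop of host/discovery_remove_ifc.sh; condensed from the xtrace (details of the real script may differ):

    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {                   # poll once a second until the list matches
        while [[ $(get_bdev_list) != "$1" ]]; do
            sleep 1
        done
    }

First it waits for nvme0n1 to appear; then, after the interface is pulled, for the list to drain to the empty string.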
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:08.320 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.581 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:08.581 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:09.523 07:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:09.523 07:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:09.523 07:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:09.523 07:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.523 07:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:09.523 07:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:09.523 07:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:09.523 07:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.523 07:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:09.523 07:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:10.465 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:10.465 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:10.465 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:10.465 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.465 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:10.465 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:10.465 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:10.465 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.465 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:10.465 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:11.850 07:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:11.850 07:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:11.850 07:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:11.850 07:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.850 07:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:11.850 07:25:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:11.850 07:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:11.850 07:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.850 07:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:11.850 07:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:12.792 [2024-11-20 07:25:34.770599] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:12.792 [2024-11-20 07:25:34.770637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.792 [2024-11-20 07:25:34.770647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.792 [2024-11-20 07:25:34.770654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.792 [2024-11-20 07:25:34.770659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.792 [2024-11-20 07:25:34.770665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.792 [2024-11-20 07:25:34.770670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.792 [2024-11-20 07:25:34.770676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.792 [2024-11-20 07:25:34.770681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.792 [2024-11-20 07:25:34.770687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.792 [2024-11-20 07:25:34.770692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.792 [2024-11-20 07:25:34.770697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4c00 is same with the state(6) to be set 00:27:12.792 07:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:12.792 07:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:12.792 07:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:12.792 07:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.792 07:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:12.792 [2024-11-20 07:25:34.780622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e4c00 (9): Bad file descriptor 00:27:12.792 07:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:27:12.792 07:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:12.792 [2024-11-20 07:25:34.790656] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:12.792 [2024-11-20 07:25:34.790666] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:12.792 [2024-11-20 07:25:34.790670] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:12.792 [2024-11-20 07:25:34.790674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:12.792 [2024-11-20 07:25:34.790690] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:12.792 07:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.792 07:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:12.792 07:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:13.734 07:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:13.734 07:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:13.734 07:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:13.734 07:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.734 07:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:13.734 07:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:13.734 07:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:13.734 [2024-11-20 07:25:35.839189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:13.734 [2024-11-20 07:25:35.839268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e4c00 with addr=10.0.0.2, port=4420 00:27:13.734 [2024-11-20 07:25:35.839299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4c00 is same with the state(6) to be set 00:27:13.734 [2024-11-20 07:25:35.839354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e4c00 (9): Bad file descriptor 00:27:13.734 [2024-11-20 07:25:35.840464] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:13.734 [2024-11-20 07:25:35.840540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:13.734 [2024-11-20 07:25:35.840564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:13.734 [2024-11-20 07:25:35.840588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:13.734 [2024-11-20 07:25:35.840608] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:27:13.734 [2024-11-20 07:25:35.840624] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:13.735 [2024-11-20 07:25:35.840637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:13.735 [2024-11-20 07:25:35.840659] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:13.735 [2024-11-20 07:25:35.840674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:13.735 07:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.735 07:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:13.735 07:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:14.677 [2024-11-20 07:25:36.843097] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:14.677 [2024-11-20 07:25:36.843113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:14.677 [2024-11-20 07:25:36.843123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:14.677 [2024-11-20 07:25:36.843129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:14.677 [2024-11-20 07:25:36.843135] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:14.677 [2024-11-20 07:25:36.843140] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:14.677 [2024-11-20 07:25:36.843144] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:14.677 [2024-11-20 07:25:36.843147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
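This is exactly the failure window those short timers were chosen for: with the target address flushed, connect() fails with errno 110 (ETIMEDOUT), each 1 s reconnect attempt fails again, and once the 2 s controller-loss timeout lapses the controller is deleted, which is what finally drains the bdev list in the next poll. The same state machine can be watched from outside with an existing RPC (jq just pretty-prints the JSON):

    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq .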
00:27:14.677 [2024-11-20 07:25:36.843171] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:14.677 [2024-11-20 07:25:36.843188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:14.677 [2024-11-20 07:25:36.843195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.677 [2024-11-20 07:25:36.843203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:14.677 [2024-11-20 07:25:36.843208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.677 [2024-11-20 07:25:36.843214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:14.677 [2024-11-20 07:25:36.843219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.677 [2024-11-20 07:25:36.843224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:14.677 [2024-11-20 07:25:36.843229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.677 [2024-11-20 07:25:36.843235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:14.677 [2024-11-20 07:25:36.843241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.677 [2024-11-20 07:25:36.843246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:27:14.677 [2024-11-20 07:25:36.843601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d4340 (9): Bad file descriptor 00:27:14.677 [2024-11-20 07:25:36.844611] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:14.677 [2024-11-20 07:25:36.844620] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:14.677 07:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:14.677 07:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:14.678 07:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:14.678 07:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.678 07:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:14.678 07:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:14.678 07:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:14.678 07:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.678 07:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:14.678 07:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:14.678 07:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:14.938 07:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:14.938 07:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:14.938 07:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:14.938 07:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:14.938 07:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.938 07:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:14.938 07:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:14.938 07:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:14.938 07:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.938 07:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:14.938 07:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:15.879 07:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:15.879 07:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:15.879 07:25:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:15.879 07:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.879 07:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:15.879 07:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:15.879 07:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:15.879 07:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.879 07:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:15.879 07:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:16.820 [2024-11-20 07:25:38.900314] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:16.820 [2024-11-20 07:25:38.900329] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:16.820 [2024-11-20 07:25:38.900339] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:16.820 [2024-11-20 07:25:38.987585] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:16.820 [2024-11-20 07:25:39.089369] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:16.820 [2024-11-20 07:25:39.090001] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x20d9260:1 started. 00:27:16.820 [2024-11-20 07:25:39.090912] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:16.820 [2024-11-20 07:25:39.090937] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:16.820 [2024-11-20 07:25:39.090953] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:16.820 [2024-11-20 07:25:39.090963] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:16.820 [2024-11-20 07:25:39.090969] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:17.081 [2024-11-20 07:25:39.097472] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x20d9260 was disconnected and freed. delete nvme_qpair. 
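Restoring the target path (the @82/@83 ip commands a few polls back) lets the still-running discovery poller find the subsystem again on its own; no host-side RPC is needed. Note that it attaches a brand-new controller, nvme1, rather than reviving nvme0, which is why the script now waits for nvme1n1:

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # discovery_poller re-reads the log page and attaches nvme1 by itself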
00:27:17.081 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:17.081 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:17.081 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:17.081 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.081 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:17.081 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:17.081 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:17.081 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.081 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:17.081 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:17.081 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3658963 00:27:17.081 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 3658963 ']' 00:27:17.081 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 3658963 00:27:17.081 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:27:17.081 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:17.081 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3658963 00:27:17.081 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:17.081 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:17.081 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3658963' 00:27:17.081 killing process with pid 3658963 00:27:17.081 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 3658963 00:27:17.081 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 3658963 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:17.342 rmmod nvme_tcp 00:27:17.342 rmmod nvme_fabrics 00:27:17.342 rmmod nvme_keyring 00:27:17.342 07:25:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3658920 ']' 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3658920 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 3658920 ']' 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 3658920 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3658920 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3658920' 00:27:17.342 killing process with pid 3658920 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 3658920 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 3658920 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:17.342 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:19.888 00:27:19.888 real 0m23.381s 00:27:19.888 user 0m27.465s 00:27:19.888 sys 0m7.100s 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
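Teardown mirrors setup: unload the kernel NVMe modules, restore iptables by filtering out only the SPDK-tagged rules (the iptr helper: iptables-save | grep -v SPDK_NVMF | iptables-restore), and dismantle the namespace plumbing. Roughly:

    modprobe -v -r nvme-tcp                        # rmmod nvme_tcp/nvme_keyring as traced
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # assumption: _remove_spdk_ns (its xtrace is redirected away above) deletes
    # the cvl_0_0_ns_spdk namespace, returning cvl_0_0 to the root namespace
    ip -4 addr flush cvl_0_1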
common/autotest_common.sh@1128 -- # xtrace_disable 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:19.888 ************************************ 00:27:19.888 END TEST nvmf_discovery_remove_ifc 00:27:19.888 ************************************ 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.888 ************************************ 00:27:19.888 START TEST nvmf_identify_kernel_target 00:27:19.888 ************************************ 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:19.888 * Looking for test storage... 00:27:19.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:19.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.888 --rc genhtml_branch_coverage=1 00:27:19.888 --rc genhtml_function_coverage=1 00:27:19.888 --rc genhtml_legend=1 00:27:19.888 --rc geninfo_all_blocks=1 00:27:19.888 --rc geninfo_unexecuted_blocks=1 00:27:19.888 00:27:19.888 ' 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:19.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.888 --rc genhtml_branch_coverage=1 00:27:19.888 --rc genhtml_function_coverage=1 00:27:19.888 --rc genhtml_legend=1 00:27:19.888 --rc geninfo_all_blocks=1 00:27:19.888 --rc geninfo_unexecuted_blocks=1 00:27:19.888 00:27:19.888 ' 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:19.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.888 --rc genhtml_branch_coverage=1 00:27:19.888 --rc genhtml_function_coverage=1 00:27:19.888 --rc genhtml_legend=1 00:27:19.888 --rc geninfo_all_blocks=1 00:27:19.888 --rc geninfo_unexecuted_blocks=1 00:27:19.888 00:27:19.888 ' 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:19.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.888 --rc genhtml_branch_coverage=1 00:27:19.888 --rc genhtml_function_coverage=1 00:27:19.888 --rc genhtml_legend=1 00:27:19.888 --rc geninfo_all_blocks=1 00:27:19.888 --rc geninfo_unexecuted_blocks=1 00:27:19.888 00:27:19.888 ' 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
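The long scripts/common.sh trace above is just a semantic version comparison: lt 1.15 2 asks whether the installed lcov predates 2.x, which decides which coverage flags get exported. A condensed paraphrase of the traced logic:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-:                              # split versions on dots/dashes/colons
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
    }
    lt 1.15 2 && echo "lcov is older than 2.x"     # true for this runner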
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:19.888 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:19.889 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:19.889 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:19.889 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:19.889 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.889 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.889 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.889 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
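The host identity above is minted per run: nvme gen-hostnqn emits a uuid-flavored NQN, and the host ID is its trailing uuid. One way to derive it, consistent with the two values traced at @17/@18 (the exact expansion the script uses is an assumption):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # strip through the last ':' -> bare uuid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")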
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.889 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.889 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.889 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:19.889 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.889 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:19.889 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:19.889 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:19.889 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:19.889 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:19.889 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:19.889 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
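The wall of PATH text above is paths/export.sh doing nothing more exotic than prepending the pinned toolchain directories every time it is sourced, then exporting the result; that re-sourcing is why the same three entries stack up:

    PATH=/opt/golangci/1.54.2/bin:$PATH   # @2
    PATH=/opt/go/1.21.1/bin:$PATH         # @3
    PATH=/opt/protoc/21.7/bin:$PATH       # @4
    export PATH                           # @5; @6 then echoes the accumulated value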
-- # '[' '' -eq 1 ']' 00:27:19.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:19.889 07:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:19.889 07:25:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:19.889 07:25:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:19.889 07:25:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:19.889 07:25:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:19.889 07:25:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:19.889 07:25:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:19.889 07:25:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:19.889 07:25:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:19.889 07:25:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.889 07:25:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:19.889 07:25:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.889 07:25:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:19.889 07:25:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:19.889 07:25:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:27:19.889 07:25:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:28.031 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:28.031 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:28.032 07:25:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:28.032 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:28.032 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:28.032 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:28.032 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:28.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:28.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.544 ms 00:27:28.032 00:27:28.032 --- 10.0.0.2 ping statistics --- 00:27:28.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.032 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms 00:27:28.032 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:28.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:28.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:27:28.032 00:27:28.032 --- 10.0.0.1 ping statistics --- 00:27:28.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.033 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:28.033 07:25:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:28.033 07:25:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:31.336 Waiting for block devices as requested 00:27:31.336 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:31.336 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:31.336 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:31.336 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:31.336 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:31.336 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:31.336 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:31.336 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:31.597 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:31.857 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:31.857 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:31.857 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:32.119 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:32.119 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:32.119 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:32.119 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:32.379 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
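[editor's note] The configure_kernel_target call traced here drives the kernel nvmet driver purely through configfs; the mkdir/echo/ln -s sequence traced below shows the values written but not their redirection targets, since bash xtrace does not print redirections. The following is a minimal sketch of the same flow, assuming the standard nvmet configfs attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) — the trace itself does not confirm which file each echo lands in.

#!/usr/bin/env bash
# Minimal sketch of a kernel NVMe-oF TCP target, mirroring the traced
# configure_kernel_target flow. Attribute file names are assumptions
# (standard nvmet configfs); the trace only shows the echoed values.
set -e
nqn=nqn.2016-06.io.spdk:testnqn
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/$nqn
ns=$subsys/namespaces/1
port=$nvmet/ports/1

modprobe -a nvmet nvmet_tcp        # trace loads nvmet here; nvmet_tcp assumed
                                   # (teardown later removes both)
mkdir "$subsys"                    # trace: mkdir .../subsystems/$nqn
mkdir "$ns"                        # trace: mkdir .../namespaces/1
mkdir "$port"                      # trace: mkdir .../ports/1

echo "SPDK-$nqn"  > "$subsys/attr_model"          # matches "Model Number: SPDK-..." in identify
echo 1            > "$subsys/attr_allow_any_host" # trace: echo 1
echo /dev/nvme0n1 > "$ns/device_path"             # block device found by the GPT scan above
echo 1            > "$ns/enable"                  # trace: echo 1

echo 10.0.0.1 > "$port/addr_traddr"               # trace: echo 10.0.0.1 / tcp / 4420 / ipv4
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/"               # expose the subsystem on the port

Once the symlink is in place, discovery works from the initiator exactly as traced below: nvme discover -t tcp -a 10.0.0.1 -s 4420 (plus --hostnqn/--hostid).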
00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:32.641 No valid GPT data, bailing 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:32.641 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:32.903 00:27:32.903 Discovery Log Number of Records 2, Generation counter 2 00:27:32.903 =====Discovery Log Entry 0====== 00:27:32.903 trtype: tcp 00:27:32.903 adrfam: ipv4 00:27:32.903 subtype: current discovery subsystem 00:27:32.903 treq: not specified, sq flow control disable supported 00:27:32.903 portid: 1 00:27:32.903 trsvcid: 4420 00:27:32.903 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:32.903 traddr: 10.0.0.1 00:27:32.903 eflags: none 00:27:32.903 sectype: none 00:27:32.903 =====Discovery Log Entry 1====== 00:27:32.903 trtype: tcp 00:27:32.903 adrfam: ipv4 00:27:32.903 subtype: nvme subsystem 00:27:32.903 treq: not specified, sq flow control disable 
supported 00:27:32.903 portid: 1 00:27:32.903 trsvcid: 4420 00:27:32.903 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:32.903 traddr: 10.0.0.1 00:27:32.903 eflags: none 00:27:32.903 sectype: none 00:27:32.903 07:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:32.903 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:32.903 ===================================================== 00:27:32.903 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:32.903 ===================================================== 00:27:32.903 Controller Capabilities/Features 00:27:32.903 ================================ 00:27:32.903 Vendor ID: 0000 00:27:32.903 Subsystem Vendor ID: 0000 00:27:32.903 Serial Number: a9fcdfbb555892011018 00:27:32.903 Model Number: Linux 00:27:32.903 Firmware Version: 6.8.9-20 00:27:32.903 Recommended Arb Burst: 0 00:27:32.903 IEEE OUI Identifier: 00 00 00 00:27:32.903 Multi-path I/O 00:27:32.903 May have multiple subsystem ports: No 00:27:32.903 May have multiple controllers: No 00:27:32.903 Associated with SR-IOV VF: No 00:27:32.903 Max Data Transfer Size: Unlimited 00:27:32.903 Max Number of Namespaces: 0 00:27:32.903 Max Number of I/O Queues: 1024 00:27:32.903 NVMe Specification Version (VS): 1.3 00:27:32.903 NVMe Specification Version (Identify): 1.3 00:27:32.903 Maximum Queue Entries: 1024 00:27:32.903 Contiguous Queues Required: No 00:27:32.903 Arbitration Mechanisms Supported 00:27:32.903 Weighted Round Robin: Not Supported 00:27:32.903 Vendor Specific: Not Supported 00:27:32.903 Reset Timeout: 7500 ms 00:27:32.903 Doorbell Stride: 4 bytes 00:27:32.903 NVM Subsystem Reset: Not Supported 00:27:32.903 Command Sets Supported 00:27:32.903 NVM Command Set: Supported 00:27:32.903 Boot Partition: Not Supported 00:27:32.903 Memory Page Size Minimum: 4096 bytes 00:27:32.903 Memory Page Size Maximum: 4096 bytes 00:27:32.903 Persistent Memory Region: Not Supported 00:27:32.903 Optional Asynchronous Events Supported 00:27:32.903 Namespace Attribute Notices: Not Supported 00:27:32.903 Firmware Activation Notices: Not Supported 00:27:32.903 ANA Change Notices: Not Supported 00:27:32.903 PLE Aggregate Log Change Notices: Not Supported 00:27:32.903 LBA Status Info Alert Notices: Not Supported 00:27:32.903 EGE Aggregate Log Change Notices: Not Supported 00:27:32.903 Normal NVM Subsystem Shutdown event: Not Supported 00:27:32.903 Zone Descriptor Change Notices: Not Supported 00:27:32.903 Discovery Log Change Notices: Supported 00:27:32.903 Controller Attributes 00:27:32.903 128-bit Host Identifier: Not Supported 00:27:32.903 Non-Operational Permissive Mode: Not Supported 00:27:32.903 NVM Sets: Not Supported 00:27:32.903 Read Recovery Levels: Not Supported 00:27:32.903 Endurance Groups: Not Supported 00:27:32.903 Predictable Latency Mode: Not Supported 00:27:32.903 Traffic Based Keep ALive: Not Supported 00:27:32.903 Namespace Granularity: Not Supported 00:27:32.903 SQ Associations: Not Supported 00:27:32.903 UUID List: Not Supported 00:27:32.903 Multi-Domain Subsystem: Not Supported 00:27:32.903 Fixed Capacity Management: Not Supported 00:27:32.903 Variable Capacity Management: Not Supported 00:27:32.903 Delete Endurance Group: Not Supported 00:27:32.903 Delete NVM Set: Not Supported 00:27:32.903 Extended LBA Formats Supported: Not Supported 00:27:32.903 Flexible Data Placement 
Supported: Not Supported 00:27:32.903 00:27:32.903 Controller Memory Buffer Support 00:27:32.903 ================================ 00:27:32.903 Supported: No 00:27:32.903 00:27:32.903 Persistent Memory Region Support 00:27:32.903 ================================ 00:27:32.903 Supported: No 00:27:32.903 00:27:32.903 Admin Command Set Attributes 00:27:32.903 ============================ 00:27:32.903 Security Send/Receive: Not Supported 00:27:32.903 Format NVM: Not Supported 00:27:32.903 Firmware Activate/Download: Not Supported 00:27:32.903 Namespace Management: Not Supported 00:27:32.903 Device Self-Test: Not Supported 00:27:32.903 Directives: Not Supported 00:27:32.903 NVMe-MI: Not Supported 00:27:32.903 Virtualization Management: Not Supported 00:27:32.903 Doorbell Buffer Config: Not Supported 00:27:32.903 Get LBA Status Capability: Not Supported 00:27:32.903 Command & Feature Lockdown Capability: Not Supported 00:27:32.903 Abort Command Limit: 1 00:27:32.903 Async Event Request Limit: 1 00:27:32.903 Number of Firmware Slots: N/A 00:27:32.903 Firmware Slot 1 Read-Only: N/A 00:27:32.903 Firmware Activation Without Reset: N/A 00:27:32.903 Multiple Update Detection Support: N/A 00:27:32.903 Firmware Update Granularity: No Information Provided 00:27:32.903 Per-Namespace SMART Log: No 00:27:32.903 Asymmetric Namespace Access Log Page: Not Supported 00:27:32.903 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:32.903 Command Effects Log Page: Not Supported 00:27:32.903 Get Log Page Extended Data: Supported 00:27:32.903 Telemetry Log Pages: Not Supported 00:27:32.903 Persistent Event Log Pages: Not Supported 00:27:32.903 Supported Log Pages Log Page: May Support 00:27:32.903 Commands Supported & Effects Log Page: Not Supported 00:27:32.903 Feature Identifiers & Effects Log Page:May Support 00:27:32.903 NVMe-MI Commands & Effects Log Page: May Support 00:27:32.903 Data Area 4 for Telemetry Log: Not Supported 00:27:32.903 Error Log Page Entries Supported: 1 00:27:32.903 Keep Alive: Not Supported 00:27:32.903 00:27:32.903 NVM Command Set Attributes 00:27:32.903 ========================== 00:27:32.903 Submission Queue Entry Size 00:27:32.903 Max: 1 00:27:32.903 Min: 1 00:27:32.903 Completion Queue Entry Size 00:27:32.903 Max: 1 00:27:32.903 Min: 1 00:27:32.903 Number of Namespaces: 0 00:27:32.903 Compare Command: Not Supported 00:27:32.903 Write Uncorrectable Command: Not Supported 00:27:32.903 Dataset Management Command: Not Supported 00:27:32.903 Write Zeroes Command: Not Supported 00:27:32.903 Set Features Save Field: Not Supported 00:27:32.903 Reservations: Not Supported 00:27:32.903 Timestamp: Not Supported 00:27:32.903 Copy: Not Supported 00:27:32.903 Volatile Write Cache: Not Present 00:27:32.903 Atomic Write Unit (Normal): 1 00:27:32.903 Atomic Write Unit (PFail): 1 00:27:32.904 Atomic Compare & Write Unit: 1 00:27:32.904 Fused Compare & Write: Not Supported 00:27:32.904 Scatter-Gather List 00:27:32.904 SGL Command Set: Supported 00:27:32.904 SGL Keyed: Not Supported 00:27:32.904 SGL Bit Bucket Descriptor: Not Supported 00:27:32.904 SGL Metadata Pointer: Not Supported 00:27:32.904 Oversized SGL: Not Supported 00:27:32.904 SGL Metadata Address: Not Supported 00:27:32.904 SGL Offset: Supported 00:27:32.904 Transport SGL Data Block: Not Supported 00:27:32.904 Replay Protected Memory Block: Not Supported 00:27:32.904 00:27:32.904 Firmware Slot Information 00:27:32.904 ========================= 00:27:32.904 Active slot: 0 00:27:32.904 00:27:32.904 00:27:32.904 Error Log 00:27:32.904 
========= 00:27:32.904 00:27:32.904 Active Namespaces 00:27:32.904 ================= 00:27:32.904 Discovery Log Page 00:27:32.904 ================== 00:27:32.904 Generation Counter: 2 00:27:32.904 Number of Records: 2 00:27:32.904 Record Format: 0 00:27:32.904 00:27:32.904 Discovery Log Entry 0 00:27:32.904 ---------------------- 00:27:32.904 Transport Type: 3 (TCP) 00:27:32.904 Address Family: 1 (IPv4) 00:27:32.904 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:32.904 Entry Flags: 00:27:32.904 Duplicate Returned Information: 0 00:27:32.904 Explicit Persistent Connection Support for Discovery: 0 00:27:32.904 Transport Requirements: 00:27:32.904 Secure Channel: Not Specified 00:27:32.904 Port ID: 1 (0x0001) 00:27:32.904 Controller ID: 65535 (0xffff) 00:27:32.904 Admin Max SQ Size: 32 00:27:32.904 Transport Service Identifier: 4420 00:27:32.904 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:32.904 Transport Address: 10.0.0.1 00:27:32.904 Discovery Log Entry 1 00:27:32.904 ---------------------- 00:27:32.904 Transport Type: 3 (TCP) 00:27:32.904 Address Family: 1 (IPv4) 00:27:32.904 Subsystem Type: 2 (NVM Subsystem) 00:27:32.904 Entry Flags: 00:27:32.904 Duplicate Returned Information: 0 00:27:32.904 Explicit Persistent Connection Support for Discovery: 0 00:27:32.904 Transport Requirements: 00:27:32.904 Secure Channel: Not Specified 00:27:32.904 Port ID: 1 (0x0001) 00:27:32.904 Controller ID: 65535 (0xffff) 00:27:32.904 Admin Max SQ Size: 32 00:27:32.904 Transport Service Identifier: 4420 00:27:32.904 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:32.904 Transport Address: 10.0.0.1 00:27:32.904 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:33.166 get_feature(0x01) failed 00:27:33.166 get_feature(0x02) failed 00:27:33.166 get_feature(0x04) failed 00:27:33.166 ===================================================== 00:27:33.166 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:33.166 ===================================================== 00:27:33.166 Controller Capabilities/Features 00:27:33.166 ================================ 00:27:33.166 Vendor ID: 0000 00:27:33.166 Subsystem Vendor ID: 0000 00:27:33.166 Serial Number: 2da832072911fcfa8b4e 00:27:33.166 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:33.166 Firmware Version: 6.8.9-20 00:27:33.166 Recommended Arb Burst: 6 00:27:33.166 IEEE OUI Identifier: 00 00 00 00:27:33.166 Multi-path I/O 00:27:33.166 May have multiple subsystem ports: Yes 00:27:33.166 May have multiple controllers: Yes 00:27:33.166 Associated with SR-IOV VF: No 00:27:33.166 Max Data Transfer Size: Unlimited 00:27:33.166 Max Number of Namespaces: 1024 00:27:33.166 Max Number of I/O Queues: 128 00:27:33.166 NVMe Specification Version (VS): 1.3 00:27:33.166 NVMe Specification Version (Identify): 1.3 00:27:33.166 Maximum Queue Entries: 1024 00:27:33.166 Contiguous Queues Required: No 00:27:33.166 Arbitration Mechanisms Supported 00:27:33.166 Weighted Round Robin: Not Supported 00:27:33.166 Vendor Specific: Not Supported 00:27:33.166 Reset Timeout: 7500 ms 00:27:33.166 Doorbell Stride: 4 bytes 00:27:33.166 NVM Subsystem Reset: Not Supported 00:27:33.166 Command Sets Supported 00:27:33.166 NVM Command Set: Supported 00:27:33.166 Boot Partition: Not Supported 00:27:33.166 
Memory Page Size Minimum: 4096 bytes 00:27:33.166 Memory Page Size Maximum: 4096 bytes 00:27:33.167 Persistent Memory Region: Not Supported 00:27:33.167 Optional Asynchronous Events Supported 00:27:33.167 Namespace Attribute Notices: Supported 00:27:33.167 Firmware Activation Notices: Not Supported 00:27:33.167 ANA Change Notices: Supported 00:27:33.167 PLE Aggregate Log Change Notices: Not Supported 00:27:33.167 LBA Status Info Alert Notices: Not Supported 00:27:33.167 EGE Aggregate Log Change Notices: Not Supported 00:27:33.167 Normal NVM Subsystem Shutdown event: Not Supported 00:27:33.167 Zone Descriptor Change Notices: Not Supported 00:27:33.167 Discovery Log Change Notices: Not Supported 00:27:33.167 Controller Attributes 00:27:33.167 128-bit Host Identifier: Supported 00:27:33.167 Non-Operational Permissive Mode: Not Supported 00:27:33.167 NVM Sets: Not Supported 00:27:33.167 Read Recovery Levels: Not Supported 00:27:33.167 Endurance Groups: Not Supported 00:27:33.167 Predictable Latency Mode: Not Supported 00:27:33.167 Traffic Based Keep ALive: Supported 00:27:33.167 Namespace Granularity: Not Supported 00:27:33.167 SQ Associations: Not Supported 00:27:33.167 UUID List: Not Supported 00:27:33.167 Multi-Domain Subsystem: Not Supported 00:27:33.167 Fixed Capacity Management: Not Supported 00:27:33.167 Variable Capacity Management: Not Supported 00:27:33.167 Delete Endurance Group: Not Supported 00:27:33.167 Delete NVM Set: Not Supported 00:27:33.167 Extended LBA Formats Supported: Not Supported 00:27:33.167 Flexible Data Placement Supported: Not Supported 00:27:33.167 00:27:33.167 Controller Memory Buffer Support 00:27:33.167 ================================ 00:27:33.167 Supported: No 00:27:33.167 00:27:33.167 Persistent Memory Region Support 00:27:33.167 ================================ 00:27:33.167 Supported: No 00:27:33.167 00:27:33.167 Admin Command Set Attributes 00:27:33.167 ============================ 00:27:33.167 Security Send/Receive: Not Supported 00:27:33.167 Format NVM: Not Supported 00:27:33.167 Firmware Activate/Download: Not Supported 00:27:33.167 Namespace Management: Not Supported 00:27:33.167 Device Self-Test: Not Supported 00:27:33.167 Directives: Not Supported 00:27:33.167 NVMe-MI: Not Supported 00:27:33.167 Virtualization Management: Not Supported 00:27:33.167 Doorbell Buffer Config: Not Supported 00:27:33.167 Get LBA Status Capability: Not Supported 00:27:33.167 Command & Feature Lockdown Capability: Not Supported 00:27:33.167 Abort Command Limit: 4 00:27:33.167 Async Event Request Limit: 4 00:27:33.167 Number of Firmware Slots: N/A 00:27:33.167 Firmware Slot 1 Read-Only: N/A 00:27:33.167 Firmware Activation Without Reset: N/A 00:27:33.167 Multiple Update Detection Support: N/A 00:27:33.167 Firmware Update Granularity: No Information Provided 00:27:33.167 Per-Namespace SMART Log: Yes 00:27:33.167 Asymmetric Namespace Access Log Page: Supported 00:27:33.167 ANA Transition Time : 10 sec 00:27:33.167 00:27:33.167 Asymmetric Namespace Access Capabilities 00:27:33.167 ANA Optimized State : Supported 00:27:33.167 ANA Non-Optimized State : Supported 00:27:33.167 ANA Inaccessible State : Supported 00:27:33.167 ANA Persistent Loss State : Supported 00:27:33.167 ANA Change State : Supported 00:27:33.167 ANAGRPID is not changed : No 00:27:33.167 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:33.167 00:27:33.167 ANA Group Identifier Maximum : 128 00:27:33.167 Number of ANA Group Identifiers : 128 00:27:33.167 Max Number of Allowed Namespaces : 1024 00:27:33.167 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:33.167 Command Effects Log Page: Supported 00:27:33.167 Get Log Page Extended Data: Supported 00:27:33.167 Telemetry Log Pages: Not Supported 00:27:33.167 Persistent Event Log Pages: Not Supported 00:27:33.167 Supported Log Pages Log Page: May Support 00:27:33.167 Commands Supported & Effects Log Page: Not Supported 00:27:33.167 Feature Identifiers & Effects Log Page:May Support 00:27:33.167 NVMe-MI Commands & Effects Log Page: May Support 00:27:33.167 Data Area 4 for Telemetry Log: Not Supported 00:27:33.167 Error Log Page Entries Supported: 128 00:27:33.167 Keep Alive: Supported 00:27:33.167 Keep Alive Granularity: 1000 ms 00:27:33.167 00:27:33.167 NVM Command Set Attributes 00:27:33.167 ========================== 00:27:33.167 Submission Queue Entry Size 00:27:33.167 Max: 64 00:27:33.167 Min: 64 00:27:33.167 Completion Queue Entry Size 00:27:33.167 Max: 16 00:27:33.167 Min: 16 00:27:33.167 Number of Namespaces: 1024 00:27:33.167 Compare Command: Not Supported 00:27:33.167 Write Uncorrectable Command: Not Supported 00:27:33.167 Dataset Management Command: Supported 00:27:33.167 Write Zeroes Command: Supported 00:27:33.167 Set Features Save Field: Not Supported 00:27:33.167 Reservations: Not Supported 00:27:33.167 Timestamp: Not Supported 00:27:33.167 Copy: Not Supported 00:27:33.167 Volatile Write Cache: Present 00:27:33.167 Atomic Write Unit (Normal): 1 00:27:33.167 Atomic Write Unit (PFail): 1 00:27:33.167 Atomic Compare & Write Unit: 1 00:27:33.167 Fused Compare & Write: Not Supported 00:27:33.167 Scatter-Gather List 00:27:33.167 SGL Command Set: Supported 00:27:33.167 SGL Keyed: Not Supported 00:27:33.167 SGL Bit Bucket Descriptor: Not Supported 00:27:33.167 SGL Metadata Pointer: Not Supported 00:27:33.167 Oversized SGL: Not Supported 00:27:33.167 SGL Metadata Address: Not Supported 00:27:33.167 SGL Offset: Supported 00:27:33.167 Transport SGL Data Block: Not Supported 00:27:33.167 Replay Protected Memory Block: Not Supported 00:27:33.167 00:27:33.167 Firmware Slot Information 00:27:33.167 ========================= 00:27:33.167 Active slot: 0 00:27:33.167 00:27:33.167 Asymmetric Namespace Access 00:27:33.167 =========================== 00:27:33.167 Change Count : 0 00:27:33.167 Number of ANA Group Descriptors : 1 00:27:33.167 ANA Group Descriptor : 0 00:27:33.167 ANA Group ID : 1 00:27:33.167 Number of NSID Values : 1 00:27:33.167 Change Count : 0 00:27:33.167 ANA State : 1 00:27:33.167 Namespace Identifier : 1 00:27:33.167 00:27:33.167 Commands Supported and Effects 00:27:33.167 ============================== 00:27:33.167 Admin Commands 00:27:33.167 -------------- 00:27:33.167 Get Log Page (02h): Supported 00:27:33.167 Identify (06h): Supported 00:27:33.167 Abort (08h): Supported 00:27:33.167 Set Features (09h): Supported 00:27:33.167 Get Features (0Ah): Supported 00:27:33.167 Asynchronous Event Request (0Ch): Supported 00:27:33.167 Keep Alive (18h): Supported 00:27:33.167 I/O Commands 00:27:33.167 ------------ 00:27:33.167 Flush (00h): Supported 00:27:33.167 Write (01h): Supported LBA-Change 00:27:33.167 Read (02h): Supported 00:27:33.167 Write Zeroes (08h): Supported LBA-Change 00:27:33.167 Dataset Management (09h): Supported 00:27:33.167 00:27:33.167 Error Log 00:27:33.167 ========= 00:27:33.167 Entry: 0 00:27:33.167 Error Count: 0x3 00:27:33.167 Submission Queue Id: 0x0 00:27:33.167 Command Id: 0x5 00:27:33.167 Phase Bit: 0 00:27:33.167 Status Code: 0x2 00:27:33.167 Status Code Type: 0x0 00:27:33.167 Do Not Retry: 1 00:27:33.167 
Error Location: 0x28 00:27:33.167 LBA: 0x0 00:27:33.167 Namespace: 0x0 00:27:33.167 Vendor Log Page: 0x0 00:27:33.167 ----------- 00:27:33.167 Entry: 1 00:27:33.167 Error Count: 0x2 00:27:33.167 Submission Queue Id: 0x0 00:27:33.167 Command Id: 0x5 00:27:33.167 Phase Bit: 0 00:27:33.167 Status Code: 0x2 00:27:33.167 Status Code Type: 0x0 00:27:33.167 Do Not Retry: 1 00:27:33.167 Error Location: 0x28 00:27:33.167 LBA: 0x0 00:27:33.167 Namespace: 0x0 00:27:33.167 Vendor Log Page: 0x0 00:27:33.167 ----------- 00:27:33.167 Entry: 2 00:27:33.167 Error Count: 0x1 00:27:33.167 Submission Queue Id: 0x0 00:27:33.167 Command Id: 0x4 00:27:33.167 Phase Bit: 0 00:27:33.167 Status Code: 0x2 00:27:33.167 Status Code Type: 0x0 00:27:33.167 Do Not Retry: 1 00:27:33.167 Error Location: 0x28 00:27:33.167 LBA: 0x0 00:27:33.167 Namespace: 0x0 00:27:33.167 Vendor Log Page: 0x0 00:27:33.167 00:27:33.167 Number of Queues 00:27:33.167 ================ 00:27:33.167 Number of I/O Submission Queues: 128 00:27:33.167 Number of I/O Completion Queues: 128 00:27:33.167 00:27:33.167 ZNS Specific Controller Data 00:27:33.167 ============================ 00:27:33.167 Zone Append Size Limit: 0 00:27:33.167 00:27:33.167 00:27:33.167 Active Namespaces 00:27:33.167 ================= 00:27:33.167 get_feature(0x05) failed 00:27:33.167 Namespace ID:1 00:27:33.167 Command Set Identifier: NVM (00h) 00:27:33.167 Deallocate: Supported 00:27:33.168 Deallocated/Unwritten Error: Not Supported 00:27:33.168 Deallocated Read Value: Unknown 00:27:33.168 Deallocate in Write Zeroes: Not Supported 00:27:33.168 Deallocated Guard Field: 0xFFFF 00:27:33.168 Flush: Supported 00:27:33.168 Reservation: Not Supported 00:27:33.168 Namespace Sharing Capabilities: Multiple Controllers 00:27:33.168 Size (in LBAs): 3750748848 (1788GiB) 00:27:33.168 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:33.168 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:33.168 UUID: 713b211b-89dd-4ad3-906e-060a0b486f90 00:27:33.168 Thin Provisioning: Not Supported 00:27:33.168 Per-NS Atomic Units: Yes 00:27:33.168 Atomic Write Unit (Normal): 8 00:27:33.168 Atomic Write Unit (PFail): 8 00:27:33.168 Preferred Write Granularity: 8 00:27:33.168 Atomic Compare & Write Unit: 8 00:27:33.168 Atomic Boundary Size (Normal): 0 00:27:33.168 Atomic Boundary Size (PFail): 0 00:27:33.168 Atomic Boundary Offset: 0 00:27:33.168 NGUID/EUI64 Never Reused: No 00:27:33.168 ANA group ID: 1 00:27:33.168 Namespace Write Protected: No 00:27:33.168 Number of LBA Formats: 1 00:27:33.168 Current LBA Format: LBA Format #00 00:27:33.168 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:33.168 00:27:33.168 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:33.168 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:33.168 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:33.168 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:33.168 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:33.168 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:33.168 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:33.168 rmmod nvme_tcp 00:27:33.168 rmmod nvme_fabrics 00:27:33.168 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:33.168 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:33.168 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:33.168 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:27:33.168 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:33.168 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:33.168 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:33.168 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:33.168 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:33.168 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:33.168 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:33.168 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:33.168 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:33.168 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.168 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:33.168 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.082 07:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:35.082 07:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:35.082 07:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:35.082 07:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:35.082 07:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:35.343 07:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:35.343 07:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:35.343 07:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:35.343 07:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:35.343 07:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:35.343 07:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:38.645 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:38.645 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:38.645 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:27:38.645 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:38.906 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:38.906 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:38.906 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:38.906 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:38.906 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:38.906 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:38.906 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:38.906 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:38.906 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:38.906 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:38.906 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:38.906 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:38.906 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:39.478 00:27:39.478 real 0m19.694s 00:27:39.478 user 0m5.304s 00:27:39.478 sys 0m11.426s 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:39.478 ************************************ 00:27:39.478 END TEST nvmf_identify_kernel_target 00:27:39.478 ************************************ 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.478 ************************************ 00:27:39.478 START TEST nvmf_auth_host 00:27:39.478 ************************************ 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:39.478 * Looking for test storage... 
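[editor's note] The auth test's first traced work below is common/autotest_common.sh comparing the installed lcov against 1.15 via the lt/cmp_versions helpers from scripts/common.sh. A minimal, self-contained rendering of the same dotted-version comparison the trace steps through (the real helper additionally validates each field with decimal(); non-numeric fields are out of scope for this sketch):

# Sketch of cmp_versions: split both versions on . - : then compare
# numerically field by field, padding missing fields with 0.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v n
    IFS='.-:' read -ra ver1 <<< "$1"   # trace: IFS=.-: ; read -ra ver1
    IFS='.-:' read -ra ver2 <<< "$3"
    n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
            [[ $op == '>' || $op == '>=' ]]; return
        elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
            [[ $op == '<' || $op == '<=' ]]; return
        fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all fields equal
}
lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov 1.15 is older than 2"   # matches the traced call: lt 1.15 2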
00:27:39.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:39.478 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:39.479 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:39.740 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:39.740 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:39.740 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:39.740 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:39.740 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:39.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.740 --rc genhtml_branch_coverage=1 00:27:39.740 --rc genhtml_function_coverage=1 00:27:39.740 --rc genhtml_legend=1 00:27:39.740 --rc geninfo_all_blocks=1 00:27:39.740 --rc geninfo_unexecuted_blocks=1 00:27:39.740 00:27:39.740 ' 00:27:39.740 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:39.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.740 --rc genhtml_branch_coverage=1 00:27:39.740 --rc genhtml_function_coverage=1 00:27:39.740 --rc genhtml_legend=1 00:27:39.740 --rc geninfo_all_blocks=1 00:27:39.740 --rc geninfo_unexecuted_blocks=1 00:27:39.740 00:27:39.740 ' 00:27:39.740 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:39.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.740 --rc genhtml_branch_coverage=1 00:27:39.740 --rc genhtml_function_coverage=1 00:27:39.740 --rc genhtml_legend=1 00:27:39.740 --rc geninfo_all_blocks=1 00:27:39.740 --rc geninfo_unexecuted_blocks=1 00:27:39.740 00:27:39.740 ' 00:27:39.740 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:39.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.740 --rc genhtml_branch_coverage=1 00:27:39.740 --rc genhtml_function_coverage=1 00:27:39.740 --rc genhtml_legend=1 00:27:39.740 --rc geninfo_all_blocks=1 00:27:39.740 --rc geninfo_unexecuted_blocks=1 00:27:39.740 00:27:39.740 ' 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.741 07:26:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:39.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
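Note: the error recorded mid-trace ("common.sh: line 33: [: : integer expression expected") comes from the traced test '[' '' -eq 1 ']': a numeric comparison against an empty, unset config variable, which test(1) rejects. The script survives because the failed test simply takes the false branch, but the idiomatic guard is to supply a numeric default before comparing. A short sketch; SOME_FLAG is a stand-in for whatever variable line 33 actually reads:

# Sketch: guard numeric tests against unset/empty flags.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then    # ${var:-0} supplies a numeric default
    echo "flag enabled"
fi
# Equivalent bash arithmetic form, which also tolerates an explicit default:
if (( ${SOME_FLAG:-0} == 1 )); then
    echo "flag enabled"
fi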
subnqn=nqn.2024-02.io.spdk:cnode0 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:39.741 07:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:47.886 07:26:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:47.886 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:47.886 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.886 
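Note: the block above is NIC discovery. common.sh builds allow-lists of supported PCI vendor:device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, assorted Mellanox ConnectX IDs), keeps the e810 set because SPDK_TEST_NVMF_NICS=e810, then resolves each matched PCI function to its kernel net device through sysfs. A condensed, standalone sketch of that last step, paraphrasing the pci_net_devs glob in the trace (the PCI address is the one found in this run):

# Sketch: map a PCI function to its bound net device(s) via sysfs.
pci=0000:4b:00.0
for netdir in /sys/bus/pci/devices/$pci/net/*; do
    [ -e "$netdir" ] || continue        # glob stays literal if no netdev is bound
    dev=${netdir##*/}
    state=$(cat "$netdir/operstate")    # "up" gates selection in the trace
    echo "PCI $pci -> $dev ($state)"
done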
07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.886 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:47.887 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:47.887 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.887 07:26:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:47.887 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:47.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:47.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:27:47.887 00:27:47.887 --- 10.0.0.2 ping statistics --- 00:27:47.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.887 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:47.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:47.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:27:47.887 00:27:47.887 --- 10.0.0.1 ping statistics --- 00:27:47.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.887 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3673456 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3673456 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 3673456 ']' 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
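Note: nvmf_tcp_init, traced above, builds the test topology from the two E810 ports: cvl_0_0 is moved into a fresh network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP/4420 is opened in iptables, and connectivity is ping-verified in both directions before the target starts. The same sequence, condensed from the trace into a runnable sketch (interface names and addresses are this run's):

# Sketch of the traced topology: target port isolated in a netns,
# initiator port left in the root namespace, back-to-back link assumed.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1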
00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:47.887 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2a2f9d99fedd76ee3830afa4b30e1e3b 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.4Fe 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2a2f9d99fedd76ee3830afa4b30e1e3b 0 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2a2f9d99fedd76ee3830afa4b30e1e3b 0 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2a2f9d99fedd76ee3830afa4b30e1e3b 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.4Fe 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.4Fe 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.4Fe 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:48.149 07:26:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=12bc10dca2ba6bb35d6c922d22f407c77afd1912d47397b20d5aa788a7316415 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.771 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 12bc10dca2ba6bb35d6c922d22f407c77afd1912d47397b20d5aa788a7316415 3 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 12bc10dca2ba6bb35d6c922d22f407c77afd1912d47397b20d5aa788a7316415 3 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=12bc10dca2ba6bb35d6c922d22f407c77afd1912d47397b20d5aa788a7316415 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.771 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.771 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.771 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:48.149 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:48.150 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:48.150 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:48.150 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=68f6eedeb8c1d77e0b2398f9994583057ff408d5fcd65df2 00:27:48.150 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:48.150 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.mHS 00:27:48.150 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 68f6eedeb8c1d77e0b2398f9994583057ff408d5fcd65df2 0 00:27:48.150 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 68f6eedeb8c1d77e0b2398f9994583057ff408d5fcd65df2 0 
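Note: gen_dhchap_key, traced above for each key slot, draws len/2 random bytes as a hex string via xxd, then format_dhchap_key wraps that ASCII hex in the DHHC-1 container: DHHC-1:<2-digit hash id>:<base64 payload>:, where the hash id follows the digests map in the trace (00 = null, 01 = sha256, 02 = sha384, 03 = sha512). The inline "python -" step computes the payload; a hedged standalone equivalent, assuming the little-endian CRC32 trailer that nvme-cli appends before base64-encoding:

# Sketch: reproduce format_dhchap_key outside the test harness.
hexkey=$(xxd -p -c0 -l 24 /dev/urandom)          # 48 hex chars, as in the trace
b64=$(python3 - "$hexkey" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                    # the ASCII hex string itself
crc = zlib.crc32(secret).to_bytes(4, "little")   # assumed 4-byte CRC32 trailer
print(base64.b64encode(secret + crc).decode())
EOF
)
echo "DHHC-1:00:${b64}:"                         # 00 = null hash qualifier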
00:27:48.150 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:48.150 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:48.150 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=68f6eedeb8c1d77e0b2398f9994583057ff408d5fcd65df2 00:27:48.150 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:48.150 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:48.150 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.mHS 00:27:48.150 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.mHS 00:27:48.150 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.mHS 00:27:48.150 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:48.150 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:48.150 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:48.150 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:48.150 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:48.150 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d15f9723b033a4b091a26446b5604b225d9371894e8a19c2 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.P8x 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d15f9723b033a4b091a26446b5604b225d9371894e8a19c2 2 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d15f9723b033a4b091a26446b5604b225d9371894e8a19c2 2 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d15f9723b033a4b091a26446b5604b225d9371894e8a19c2 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.P8x 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.P8x 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.P8x 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:48.411 07:26:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e7068cb9bec3ac7a6f1bba3f38d5c80c 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.gva 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e7068cb9bec3ac7a6f1bba3f38d5c80c 1 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e7068cb9bec3ac7a6f1bba3f38d5c80c 1 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e7068cb9bec3ac7a6f1bba3f38d5c80c 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.gva 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.gva 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.gva 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d3f96d0b85232d5b2b8a90343cffe077 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Y2S 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d3f96d0b85232d5b2b8a90343cffe077 1 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d3f96d0b85232d5b2b8a90343cffe077 1 00:27:48.411 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=d3f96d0b85232d5b2b8a90343cffe077 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Y2S 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Y2S 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Y2S 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=55a6b21c98269a510971a6dad97ecd1770f8ed25e0f8e850 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.7yA 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 55a6b21c98269a510971a6dad97ecd1770f8ed25e0f8e850 2 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 55a6b21c98269a510971a6dad97ecd1770f8ed25e0f8e850 2 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=55a6b21c98269a510971a6dad97ecd1770f8ed25e0f8e850 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:48.412 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.7yA 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.7yA 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.7yA 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:48.673 07:26:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b4a4aeeee7157479fc48b9376f49f9ff 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Gtj 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b4a4aeeee7157479fc48b9376f49f9ff 0 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b4a4aeeee7157479fc48b9376f49f9ff 0 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b4a4aeeee7157479fc48b9376f49f9ff 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Gtj 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Gtj 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Gtj 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=dbfc13e3125d68f4ae0e80a6c2c60d16355dcdd2aa7b86e849c7637b4fe8066a 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.p8f 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key dbfc13e3125d68f4ae0e80a6c2c60d16355dcdd2aa7b86e849c7637b4fe8066a 3 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 dbfc13e3125d68f4ae0e80a6c2c60d16355dcdd2aa7b86e849c7637b4fe8066a 3 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=dbfc13e3125d68f4ae0e80a6c2c60d16355dcdd2aa7b86e849c7637b4fe8066a 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.p8f 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.p8f 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.p8f 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3673456 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 3673456 ']' 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:48.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:48.673 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.4Fe 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.771 ]] 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.771 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.mHS 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.P8x ]] 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.P8x 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.gva 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Y2S ]] 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Y2S 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.7yA 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Gtj ]] 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Gtj 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.p8f 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:48.934 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:48.935 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.935 07:26:11 
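Note: once nvmf_tgt is listening, host/auth.sh@80-82 (traced above) registers every generated key file with the target's keyring over the RPC socket; the controller-side challenge key ckeyN is only added when one was generated for that slot (slot 4 has none). The same loop as a standalone sketch, with the keys/ckeys arrays assumed to be populated as in the trace:

# Sketch of the traced registration loop.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in "${!keys[@]}"; do
    "$rpc" keyring_file_add_key "key$i" "${keys[$i]}"
    if [[ -n ${ckeys[$i]} ]]; then        # skip slots without a ctrlr key
        "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done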
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.935 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.935 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.935 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.935 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.935 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.935 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.935 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.935 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.935 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:48.935 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:48.935 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:48.935 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:48.935 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:48.935 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:48.935 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:48.935 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:48.935 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:48.935 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:48.935 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:52.380 Waiting for block devices as requested 00:27:52.380 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:52.380 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:52.640 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:52.640 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:52.640 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:52.901 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:52.901 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:52.901 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:53.161 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:53.161 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:53.423 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:53.423 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:53.423 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:53.423 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:53.683 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:53.683 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:53.683 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:54.626 No valid GPT data, bailing 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:54.626 07:26:16 
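Note: configure_kernel_target, traced above, stands up the Linux-kernel NVMe-oF target the SPDK host will authenticate against: a configfs subsystem backed by the local /dev/nvme0n1 (after spdk-gpt.py confirms it is not in use), exported over TCP on 10.0.0.1:4420. A condensed sketch of those mkdir/echo/ln steps; the attribute file names for the two bare "echo 1" and the "echo SPDK-..." lines are assumptions (attr_allow_any_host, enable, attr_model), as the trace does not show redirection targets:

# Sketch of the traced configfs setup for the kernel target.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"        # assumed path
echo 1 > "$subsys/attr_allow_any_host"                             # assumed path
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"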
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:54.626 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:54.886 00:27:54.886 Discovery Log Number of Records 2, Generation counter 2 00:27:54.886 =====Discovery Log Entry 0====== 00:27:54.886 trtype: tcp 00:27:54.886 adrfam: ipv4 00:27:54.886 subtype: current discovery subsystem 00:27:54.886 treq: not specified, sq flow control disable supported 00:27:54.886 portid: 1 00:27:54.886 trsvcid: 4420 00:27:54.886 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:54.886 traddr: 10.0.0.1 00:27:54.886 eflags: none 00:27:54.886 sectype: none 00:27:54.886 =====Discovery Log Entry 1====== 00:27:54.887 trtype: tcp 00:27:54.887 adrfam: ipv4 00:27:54.887 subtype: nvme subsystem 00:27:54.887 treq: not specified, sq flow control disable supported 00:27:54.887 portid: 1 00:27:54.887 trsvcid: 4420 00:27:54.887 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:54.887 traddr: 10.0.0.1 00:27:54.887 eflags: none 00:27:54.887 sectype: none 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: ]] 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.887 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.887 nvme0n1 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: ]] 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
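connect_authenticate (host/auth.sh@55 onwards in this trace) is the per-combination check: restrict the initiator to the digest and DH group under test, attach with DH-HMAC-CHAP keys, confirm the controller came up, and detach. Roughly, in terms of scripts/rpc.py rather than the suite's rpc_cmd wrapper, for the keyid=0 pass announced just above (key0/ckey0 are keyring key names assumed to have been registered earlier in the run):

    # allow only the digest/dhgroup combination under test
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # attach with DH-HMAC-CHAP; --dhchap-ctrlr-key requests bidirectional auth
    # (key0/ckey0: assumed to be pre-registered keyring key names)
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # pass criterion: the controller exists, then detaches cleanly
    [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The recurring common/autotest_common.sh@589 "[[ 0 == 0 ]]" lines are the rpc_cmd wrapper asserting a zero exit status after each RPC.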
00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:54.887 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.148 nvme0n1 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.148 07:26:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: ]] 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.148 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.409 nvme0n1 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: ]] 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.409 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.410 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.410 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.410 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.410 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.410 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.410 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.410 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.410 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:55.410 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.410 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.671 nvme0n1 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: ]] 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.671 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.932 nvme0n1 00:27:55.932 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.932 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.932 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.932 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.932 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.932 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.932 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.932 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.932 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.932 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.932 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.932 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.932 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:55.932 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.932 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.932 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:55.932 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:55.932 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:27:55.932 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:55.932 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.932 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:55.932 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:27:55.932 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.933 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.193 nvme0n1 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.193 07:26:18 
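The DHHC-1 strings above follow the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<t>:<base64 payload>:, where the payload is the secret with a CRC appended and <t> names the transformation hash applied to it (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512); the suite's keys deliberately span all four ids. nvmet_auth_set_key (host/auth.sh@42-@51) writes one such pair into the kernel host object, roughly as below for the sha256/ffdhe2048 round just completed; the configfs attribute names follow the kernel's nvmet auth layout and are inferred, since the trace shows only the echoed values:

    # attribute names inferred from the kernel nvmet auth configfs layout
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > $host/dhchap_hash      # digest under test
    echo ffdhe2048      > $host/dhchap_dhgroup   # DH group under test
    echo "$key"         > $host/dhchap_key       # host secret for this keyid
    # the controller secret is written only when this keyid defines a ckey,
    # which is what the [[ -z ... ]] guard at @51 decides
    [[ -n $ckey ]] && echo "$ckey" > $host/dhchap_ctrl_key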
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: ]] 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:56.193 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:56.194 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:56.194 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.194 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:56.194 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.194 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.194 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.194 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.194 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.194 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.194 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.194 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.194 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.194 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.194 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.194 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.194 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.194 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.194 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:56.194 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.194 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.454 nvme0n1 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: ]] 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.454 
07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.454 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.715 nvme0n1 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: ]] 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.715 07:26:18 
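The host/auth.sh@100-@102 markers recurring through this phase trace three nested loops: every allowed digest, every DH group, every key id, with the target re-keyed (@103) before each connection attempt (@104). Reconstructed from the loop headers and the printf lists earlier in the trace (a sketch, not the verbatim script):

    # keys is the suite's array of DHHC-1 secrets, defined elsewhere in auth.sh
    digests=(sha256 sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do                          # key ids 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104
            done
        done
    done

At this point the run is inside digest=sha256, partway through the ffdhe3072 group.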
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.715 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.716 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.716 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.716 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:56.716 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.716 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.976 nvme0n1 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: ]] 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.976 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.977 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.977 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.977 07:26:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.977 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.977 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.977 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.977 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.977 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:56.977 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.977 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.237 nvme0n1 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:57.237 07:26:19 
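The nvmf/common.sh@769-@783 block repeated before every attach is get_main_ns_ip choosing the address to dial: it maps the transport to the name of an environment variable, verifies both are set, and dereferences it. A reconstruction from the trace (the transport variable's name is an assumption; the trace shows only its value, tcp):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # TEST_TRANSPORT: assumed name for the suite's transport variable
        ip=${ip_candidates[$TEST_TRANSPORT]}   # picks the variable *name* for tcp
        [[ -n ${!ip} ]] && echo "${!ip}"       # indirect expansion; here 10.0.0.1
    }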
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.237 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.498 nvme0n1 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: ]] 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.498 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.499 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:57.499 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:57.499 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.499 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:57.499 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.499 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.499 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.499 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.499 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.499 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:27:57.499 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.499 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.499 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.499 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.499 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.499 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.499 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.499 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.499 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:57.499 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.499 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.762 nvme0n1 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:27:57.762 07:26:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: ]] 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.762 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.023 nvme0n1 00:27:58.023 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:58.023 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.023 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.023 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.023 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.023 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.023 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.023 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.023 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.023 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: ]] 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
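Each nvmet_auth_set_key call traced above (host/auth.sh@42-@51) provisions the kernel nvmet target for the next authentication attempt: it emits the digest as 'hmac(sha256)', the DH group name, the keyid's DHHC-1 host secret, and, when one is defined, the matching controller secret for bidirectional authentication. The xtrace output shows the echoes but not their redirection targets; a minimal sketch, assuming they land in the standard per-host Linux nvmet configfs attributes (the hostnqn path below mirrors the NQN used by bdev_nvme_attach_controller throughout this run):

# Hypothetical reconstruction of nvmet_auth_set_key. The attribute names
# (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are the ones
# Linux nvmet exposes for an allowed host; keys[] and ckeys[] are the
# DHHC-1 secret arrays this trace iterates over.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host/dhchap_hash"            # e.g. hmac(sha256)
    echo "$dhgroup" > "$host/dhchap_dhgroup"              # e.g. ffdhe4096
    echo "${keys[keyid]}" > "$host/dhchap_key"            # DHHC-1:xx:...: host secret
    if [[ -n ${ckeys[keyid]} ]]; then                     # controller key only when set,
        echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"  # enabling bidirectional auth
    fi
}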
00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.284 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.545 nvme0n1 00:27:58.545 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.545 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: ]] 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.546 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.806 nvme0n1 00:27:58.806 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.806 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.806 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.806 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.806 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.806 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.807 07:26:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.807 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.807 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.807 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.807 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.807 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.807 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.807 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.807 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.807 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.807 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.807 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.807 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.807 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.807 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:58.807 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.807 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.068 nvme0n1 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: ]] 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.068 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.329 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.329 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.329 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.329 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.329 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.329 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.329 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.329 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.329 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.329 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:27:59.329 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.329 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.329 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:59.329 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.329 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.589 nvme0n1 00:27:59.589 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: ]] 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 
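On the host side, connect_authenticate (host/auth.sh@55-@65) pins the initiator to a single digest/DH-group pair with bdev_nvme_set_options, resolves the target address via get_main_ns_ip (10.0.0.1 here, since the transport is tcp), attaches a controller with the keyid-specific secrets, and only counts the iteration as passed once bdev_nvme_get_controllers reports nvme0, after which it detaches again. The same sequence issued by hand with SPDK's scripts/rpc.py rather than the harness's rpc_cmd wrapper (flags taken verbatim from the trace; this assumes key1 and ckey1 were registered with the SPDK keyring earlier in the run, as the harness does before entering this loop):

scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0              # reset for the next keyid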
00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.590 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.162 nvme0n1 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.162 07:26:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: ]] 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.162 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.734 nvme0n1 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: ]] 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.734 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.995 nvme0n1 00:28:00.995 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.995 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.995 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.995 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.995 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.995 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.257 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.519 nvme0n1 00:28:01.519 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.519 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.519 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.519 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.519 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.519 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: ]] 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.780 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:02.351 nvme0n1 00:28:02.351 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.351 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.351 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.351 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.351 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.351 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: ]] 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.352 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.294 nvme0n1 00:28:03.294 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.294 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.294 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.294 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.294 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.294 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:03.295 
07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: ]] 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.295 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.866 nvme0n1 00:28:03.866 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.866 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.866 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.866 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.866 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.866 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.866 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.866 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.866 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.866 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.866 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.866 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.866 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:03.866 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.866 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:03.866 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:03.866 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:03.866 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:03.866 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:03.866 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:03.866 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:03.866 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:03.866 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: ]] 00:28:03.866 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:03.866 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:03.866 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.866 
07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:03.866 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:03.866 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:03.867 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.867 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:03.867 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.867 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.867 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.867 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.867 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.867 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.867 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.867 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.867 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.867 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.867 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.867 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.867 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.867 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.867 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:03.867 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.867 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.437 nvme0n1 00:28:04.437 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.437 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.438 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.438 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.438 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.438 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.438 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.438 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.438 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.438 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:04.698 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.698 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.698 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:04.698 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.698 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:04.698 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:04.698 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:04.698 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:04.698 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:04.698 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:04.698 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:04.698 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:04.698 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:04.698 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:04.698 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.698 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:04.698 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:04.698 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:04.698 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.698 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:04.698 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.699 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.699 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.699 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.699 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:04.699 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:04.699 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:04.699 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.699 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.699 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:04.699 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.699 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:04.699 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:04.699 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:04.699 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:04.699 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.699 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.269 nvme0n1 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: ]] 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.270 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.531 nvme0n1 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: ]] 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.531 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.792 nvme0n1 00:28:05.792 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.792 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.792 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.792 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.792 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.792 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.792 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.792 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.792 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.792 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.792 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.792 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.792 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:05.792 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.792 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.792 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:05.792 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:05.792 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:05.792 07:26:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:05.792 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.792 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:05.792 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: ]] 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.054 nvme0n1 00:28:06.054 07:26:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.054 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.054 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.054 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.054 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.054 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.054 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.054 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.054 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.054 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.054 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.054 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.054 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:06.054 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.054 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:06.054 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.054 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:06.054 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: ]] 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.055 nvme0n1 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.055 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.317 nvme0n1 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.317 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: ]] 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.579 nvme0n1 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.579 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.840 
07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: ]] 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.840 07:26:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.840 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.840 nvme0n1 00:28:06.840 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.840 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.840 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.840 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.840 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.840 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.840 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.840 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.840 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.840 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: ]] 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.101 nvme0n1 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.101 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.362 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:07.362 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.362 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.362 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.362 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.362 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.362 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:07.362 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.362 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:07.362 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.362 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:07.362 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:07.362 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:07.362 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:07.362 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:07.362 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: ]] 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.363 nvme0n1 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.363 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.624 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.624 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.624 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.624 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.624 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.624 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.624 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:07.624 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.624 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:07.624 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.624 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:07.624 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:07.624 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:07.624 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:07.624 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:07.624 
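The get_main_ns_ip frames traced repeatedly above (nvmf/common.sh@769-783) map the transport to the name of the environment variable holding the right address, then indirect-expand it. A reconstruction of that logic; the selection steps match the trace, while the function wrapper and the TEST_TRANSPORT variable name are assumptions:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP               # RDMA runs use the first target IP
          [tcp]=NVMF_INITIATOR_IP                   # TCP runs use the initiator IP
      )
      [[ -z ${TEST_TRANSPORT:-} ]] && return 1      # '[[ -z tcp ]]' in the trace
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z $ip || -z ${!ip} ]] && return 1         # candidate name and value must both be set
      echo "${!ip}"                                 # 'echo 10.0.0.1' in the trace
  }
  TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip   # prints 10.0.0.1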
07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:07.624 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:07.624 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:07.624 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.624 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:07.624 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.625 nvme0n1 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.625 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.625 
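Key slot 4 above has no controller secret: ckey= is empty at auth.sh@46, and the attach runs with --dhchap-key key4 alone. That is the work of the ${ckeys[keyid]:+...} expansion at auth.sh@58, which emits the --dhchap-ctrlr-key flag pair only for populated slots. A self-contained illustration of the idiom (placeholder secret, not copied from the trace):

  ckeys=()
  ckeys[1]='DHHC-1:02:...'    # populated slot (elided placeholder value)
  ckeys[4]=''                 # empty slot, as for keyid 4 in this run
  for keyid in 1 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
  done
  # keyid=1 extra args: --dhchap-ctrlr-key ckey1
  # keyid=4 extra args: <none>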
07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: ]] 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.886 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.147 nvme0n1 00:28:08.147 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.147 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.147 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.147 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.147 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.147 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.147 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.147 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.147 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.147 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.147 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.147 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.147 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:08.147 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.147 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:08.147 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:08.147 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:08.147 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:08.147 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:08.147 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:08.147 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: ]] 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:08.148 07:26:30 
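Every secret in this test uses the DH-HMAC-CHAP shared-secret representation, DHHC-1:<hash-id>:<base64 payload>:. As commonly documented for nvme-cli's gen-dhchap-key (treat the details as an assumption here), the hash-id says whether the secret is used as-is (00) or transformed with SHA-256/384/512 (01/02/03), and the base64 payload is the secret plus a 4-byte CRC-32 trailer. A quick shape check against one key from the trace:

  key='DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==:'
  IFS=: read -r magic hash b64 _ <<< "$key"          # split on the colon delimiters
  [[ $magic == DHHC-1 ]] || exit 1
  len=$(printf '%s' "$b64" | base64 -d | wc -c)      # decoded payload size in bytes
  case $len in
      36|52|68) echo "hash-id $hash, $((len - 4))-byte secret + 4-byte CRC-32 trailer" ;;
      *)        echo "unexpected decoded length: $len" >&2; exit 1 ;;
  esac
  # prints: hash-id 00, 48-byte secret + 4-byte CRC-32 trailer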
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.148 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.410 nvme0n1 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: ]] 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.410 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.671 nvme0n1 00:28:08.671 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.671 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.671 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.671 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.671 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.671 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.671 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.671 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.671 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.671 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: ]] 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.932 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.193 nvme0n1 00:28:09.193 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.193 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.193 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.193 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.193 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.193 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.193 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.193 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.193 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.193 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:09.194 07:26:31 
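Each RPC in this trace is bracketed by xtrace_disable (autotest_common.sh@561), a set +x (@10), and a [[ 0 == 0 ]] status check (@589): the rpc_cmd wrapper keeps the log readable while still failing the test on a non-zero exit. A rough reconstruction; only the bracketing is visible in the log, so the body below is an assumption:

  rootdir=${rootdir:-.}               # SPDK checkout; placeholder path
  rpc_cmd() {
      local status
      set +x                          # what xtrace_disable boils down to
      "$rootdir/scripts/rpc.py" "$@"
      status=$?
      set -x                          # stand-in for xtrace_restore
      [[ $status == 0 ]]              # surfaces as '[[ 0 == 0 ]]' on success
  }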
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.194 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.455 nvme0n1 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: ]] 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.455 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.028 nvme0n1 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: ]] 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.028 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.288 nvme0n1 00:28:10.288 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.549 07:26:32 
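The progression through this section, ffdhe3072, then ffdhe4096, now ffdhe6144, each swept across key slots 0 through 4, comes from the two loop frames at auth.sh@101-102 ("for dhgroup" / "for keyid"). The shape of that driver, with the helper bodies stubbed out (the full dhgroup list is wider than this excerpt shows):

  nvmet_auth_set_key()   { echo "target:    $*"; }   # stub; real body writes configfs
  connect_authenticate() { echo "initiator: $*"; }   # stub; real body drives rpc.py
  keys=(key0 key1 key2 key3 key4)
  digest=sha384
  for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do   # groups visible in this excerpt
      for keyid in "${!keys[@]}"; do                 # slots 0..4
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
  done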
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: ]] 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.549 07:26:32 
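The attach calls reference secrets by name (key2, ckey2, and so on), so those names must have been registered with SPDK's keyring before this excerpt begins. A sketch of that registration, assuming current SPDK's file-backed keyring RPC keyring_file_add_key and made-up file paths; the secret values are copied from the trace:

  rpc=./scripts/rpc.py
  printf '%s\n' 'DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR:'  > /tmp/key2
  printf '%s\n' 'DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH:' > /tmp/ckey2
  "$rpc" keyring_file_add_key key2  /tmp/key2     # referenced as --dhchap-key key2
  "$rpc" keyring_file_add_key ckey2 /tmp/ckey2    # referenced as --dhchap-ctrlr-key ckey2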
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.549 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.810 nvme0n1 00:28:10.810 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.810 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.810 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.810 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.810 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.810 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.071 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.071 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: ]] 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:11.072 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.072 
07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.333 nvme0n1 00:28:11.333 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.333 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.333 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.333 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.333 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.333 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.333 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.333 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.333 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.333 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.595 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.857 nvme0n1 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.857 07:26:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: ]] 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.857 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.118 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.118 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.118 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.118 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.118 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.118 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.118 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.118 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.118 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.118 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.118 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.118 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.118 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:12.118 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.118 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.740 nvme0n1 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: ]] 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.740 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.313 nvme0n1 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: ]] 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.313 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.574 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.574 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.574 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.574 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:13.574 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.574 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.574 
07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.574 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.574 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.574 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.574 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.574 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:13.574 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.574 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.145 nvme0n1 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: ]] 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.145 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.717 nvme0n1 00:28:14.717 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.717 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.717 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.717 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.717 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.717 07:26:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.717 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.717 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.717 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.717 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.977 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.977 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.977 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:14.977 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.977 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:14.977 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:14.977 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:14.977 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:14.977 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:14.977 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:14.977 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:14.977 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:14.977 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:14.977 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:14.977 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.977 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:14.977 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:14.977 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:14.977 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.977 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:14.977 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.977 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.977 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.977 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.977 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.977 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.977 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.977 07:26:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.978 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.978 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.978 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.978 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.978 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.978 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.978 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:14.978 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.978 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.549 nvme0n1 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: ]] 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.549 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:15.810 nvme0n1 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: ]] 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.810 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.071 nvme0n1 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:16.071 
07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: ]] 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.071 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.332 nvme0n1 00:28:16.332 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.332 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.332 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.332 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.332 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.332 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.332 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.332 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.332 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.332 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.332 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.332 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: ]] 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.333 
07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.333 nvme0n1 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.333 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.594 nvme0n1 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.594 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: ]] 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.856 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.856 nvme0n1 00:28:16.856 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.856 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.856 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.856 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.856 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.856 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.856 
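The ckey array assignment at host/auth.sh@58 is doing quiet but important work: for keyid 4 there is no controller key (note the bare "ckey=" and "[[ -z '' ]]" records in the trace), so the --dhchap-ctrlr-key flag pair must vanish from the attach call entirely rather than be passed with an empty value, which is exactly what the ${var:+word} expansion achieves. A self-contained illustration with made-up secrets:

    #!/usr/bin/env bash
    # ${ckeys[keyid]:+...} expands to the flag pair only when a controller
    # key exists for this keyid; otherwise the array stays empty and the
    # rpc_cmd invocation carries no --dhchap-ctrlr-key argument at all.
    ckeys=("DHHC-1:00:aaaa:" "DHHC-1:00:bbbb:" "")   # keyid 2 has no controller key
    for keyid in "${!ckeys[@]}"; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<no controller-key args>}"
    done
    # keyid=0 -> --dhchap-ctrlr-key ckey0
    # keyid=1 -> --dhchap-ctrlr-key ckey1
    # keyid=2 -> <no controller-key args>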
07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.856 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.856 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.856 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: ]] 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.121 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.121 07:26:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.122 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.122 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.122 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.122 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.122 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.122 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.122 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.122 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.122 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:17.122 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.122 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.122 nvme0n1 00:28:17.122 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.122 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.122 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.122 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.122 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.122 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.122 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.122 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.122 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.122 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:17.387 07:26:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: ]] 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.387 nvme0n1 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:17.387 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:17.388 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:17.388 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:17.388 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:17.388 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:17.388 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: ]] 00:28:17.388 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.649 07:26:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.649 nvme0n1 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:17.649 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:17.910 
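The get_main_ns_ip block that repeats before every attach resolves which environment variable supplies the connect address for the transport under test: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp. A plausible reconstruction from the xtrace alone (nvmf/common.sh@769-783); the variable names match the trace, but the early-return control flow is inferred, not taken from the source:

    # Echo the main-namespace IP for $TEST_TRANSPORT, or fail if unset.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1              # trace shows: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}              # trace shows: ip=NVMF_INITIATOR_IP

        [[ -z ${!ip} ]] && return 1                       # indirection: $NVMF_INITIATOR_IP
        echo "${!ip}"                                     # trace shows: echo 10.0.0.1
    }

In this run TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1, which is why every attach in the trace targets -a 10.0.0.1.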
07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.910 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:17.910 nvme0n1 00:28:17.910 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.910 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.910 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.910 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.910 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.910 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.910 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.910 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.910 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.910 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: ]] 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:18.170 07:26:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.170 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.430 nvme0n1 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.430 07:26:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: ]] 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:18.430 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:18.431 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:18.431 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.431 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:18.431 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.431 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.431 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.431 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.431 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.431 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.431 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.431 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.431 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.431 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.431 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.431 07:26:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.431 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.431 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.431 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:18.431 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.431 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.691 nvme0n1 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: ]] 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.691 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.953 nvme0n1 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: ]] 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:18.953 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:18.954 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.954 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:18.954 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.954 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.954 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.954 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.954 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.954 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.954 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.954 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.954 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.954 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.954 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.954 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.954 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.954 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.954 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:18.954 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.214 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.214 nvme0n1 00:28:19.214 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.214 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.214 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.214 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.214 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.474 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.474 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.474 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.474 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.474 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.474 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.474 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.474 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:19.474 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.474 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:19.474 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:19.474 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:19.474 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:19.474 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:19.474 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:19.474 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.475 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.735 nvme0n1 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: ]] 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.735 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.736 07:26:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.736 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.736 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.736 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.736 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.736 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.736 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.736 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.736 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.736 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.736 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.736 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:19.736 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.736 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.306 nvme0n1 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: ]] 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:20.306 07:26:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.306 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.567 nvme0n1 00:28:20.567 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.567 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.567 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.567 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.567 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.567 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: ]] 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.827 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.087 nvme0n1 00:28:21.087 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.087 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.087 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.087 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.087 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.087 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.087 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.087 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.087 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.087 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: ]] 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.346 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.607 nvme0n1 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:21.607 07:26:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.607 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.868 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.868 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.868 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:21.868 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:21.868 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:21.868 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.868 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.868 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:21.868 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.868 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:21.868 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:21.868 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:21.868 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:21.868 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.868 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.128 nvme0n1 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmEyZjlkOTlmZWRkNzZlZTM4MzBhZmE0YjMwZTFlM2JOZQ+e: 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: ]] 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTJiYzEwZGNhMmJhNmJiMzVkNmM5MjJkMjJmNDA3Yzc3YWZkMTkxMmQ0NzM5N2IyMGQ1YWE3ODhhNzMxNjQxNR+LFq0=: 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.128 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.069 nvme0n1 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: ]] 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.069 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.639 nvme0n1 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.639 07:26:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: ]] 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.639 07:26:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:23.639 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.640 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:23.640 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:23.640 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:23.640 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:23.640 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.640 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.210 nvme0n1 00:28:24.210 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.210 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.210 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.210 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.210 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTVhNmIyMWM5ODI2OWE1MTA5NzFhNmRhZDk3ZWNkMTc3MGY4ZWQyNWUwZjhlODUwrSNhXg==: 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: ]] 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRhNGFlZWVlNzE1NzQ3OWZjNDhiOTM3NmY0OWY5ZmZ6PMp7: 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.470 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:24.471 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.471 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.471 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.471 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.471 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.471 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.471 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.471 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.471 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.471 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.471 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.471 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.471 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.471 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.471 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:24.471 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.471 
07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.040 nvme0n1 00:28:25.040 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.040 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.040 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.040 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.040 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.040 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.040 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.040 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGJmYzEzZTMxMjVkNjhmNGFlMGU4MGE2YzJjNjBkMTYzNTVkY2RkMmFhN2I4NmU4NDljNzYzN2I0ZmU4MDY2YSXxDxw=: 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.041 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.612 nvme0n1 00:28:25.612 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.612 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.612 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.612 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.612 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: ]] 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.872 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.872 request: 00:28:25.872 { 00:28:25.873 "name": "nvme0", 00:28:25.873 "trtype": "tcp", 00:28:25.873 "traddr": "10.0.0.1", 00:28:25.873 "adrfam": "ipv4", 00:28:25.873 "trsvcid": "4420", 00:28:25.873 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:25.873 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:25.873 "prchk_reftag": false, 00:28:25.873 "prchk_guard": false, 00:28:25.873 "hdgst": false, 00:28:25.873 "ddgst": false, 00:28:25.873 "allow_unrecognized_csi": false, 00:28:25.873 "method": "bdev_nvme_attach_controller", 00:28:25.873 "req_id": 1 00:28:25.873 } 00:28:25.873 Got JSON-RPC error response 00:28:25.873 response: 00:28:25.873 { 00:28:25.873 "code": -5, 00:28:25.873 "message": "Input/output error" 00:28:25.873 } 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
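The request/response pair above is the first negative case in this test: the target was re-keyed for sha256/ffdhe2048 with keyid 1, so an attach attempt that presents no DH-HMAC-CHAP key is rejected with JSON-RPC error -5 (Input/output error), and the NOT wrapper from autotest_common.sh inverts the exit status so the failure counts as a pass. A minimal sketch of reproducing that check by hand with SPDK's rpc.py (the workspace path is an assumption taken from this job; rpc_cmd in the trace is a thin wrapper around rpc.py):

```bash
# Sketch: re-run the expected-failure attach seen above by hand.
# The kernel nvmet target demands DH-HMAC-CHAP, so attaching without
# --dhchap-key must fail; success here would be the real error.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk  # assumed checkout path
if "$SPDK/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
    echo "unexpected: unauthenticated attach succeeded" >&2
    exit 1
else
    echo "attach rejected as expected (JSON-RPC -5)"
fi
```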
00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.873 request: 00:28:25.873 { 00:28:25.873 "name": "nvme0", 00:28:25.873 "trtype": "tcp", 00:28:25.873 "traddr": "10.0.0.1", 00:28:25.873 "adrfam": "ipv4", 00:28:25.873 "trsvcid": "4420", 00:28:25.873 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:25.873 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:25.873 "prchk_reftag": false, 00:28:25.873 "prchk_guard": false, 00:28:25.873 "hdgst": false, 00:28:25.873 "ddgst": false, 00:28:25.873 "dhchap_key": "key2", 00:28:25.873 "allow_unrecognized_csi": false, 00:28:25.873 "method": "bdev_nvme_attach_controller", 00:28:25.873 "req_id": 1 00:28:25.873 } 00:28:25.873 Got JSON-RPC error response 00:28:25.873 response: 00:28:25.873 { 00:28:25.873 "code": -5, 00:28:25.873 "message": "Input/output error" 00:28:25.873 } 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:25.873 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:26.133 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:26.133 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:26.133 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.133 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:26.133 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.133 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.133 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.133 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
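After each rejected attach the script confirms that the failed handshake left no controller behind: `bdev_nvme_get_controllers` piped through `jq length` must report 0, which is the `(( 0 == 0 ))` check in the trace above. Spelled out as a standalone snippet (same assumed workspace path as in the previous sketch):

```bash
# Sketch: the post-failure bookkeeping check from host/auth.sh.
# A failed DH-HMAC-CHAP attach must not leave a stale bdev_nvme controller.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk  # assumed checkout path
count=$("$SPDK/scripts/rpc.py" bdev_nvme_get_controllers | jq length)
if (( count != 0 )); then
    echo "stale controller left behind after failed auth attach" >&2
    exit 1
fi
```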
00:28:26.133 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:26.133 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.133 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.133 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.133 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.133 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.133 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.133 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.133 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.133 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.133 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.134 request: 00:28:26.134 { 00:28:26.134 "name": "nvme0", 00:28:26.134 "trtype": "tcp", 00:28:26.134 "traddr": "10.0.0.1", 00:28:26.134 "adrfam": "ipv4", 00:28:26.134 "trsvcid": "4420", 00:28:26.134 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:26.134 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:26.134 "prchk_reftag": false, 00:28:26.134 "prchk_guard": false, 00:28:26.134 "hdgst": false, 00:28:26.134 "ddgst": false, 00:28:26.134 "dhchap_key": "key1", 00:28:26.134 "dhchap_ctrlr_key": "ckey2", 00:28:26.134 "allow_unrecognized_csi": false, 00:28:26.134 "method": "bdev_nvme_attach_controller", 00:28:26.134 "req_id": 1 00:28:26.134 } 00:28:26.134 Got JSON-RPC error response 00:28:26.134 response: 00:28:26.134 { 00:28:26.134 "code": -5, 00:28:26.134 "message": "Input/output 
error" 00:28:26.134 } 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.134 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.394 nvme0n1 00:28:26.394 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.394 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:26.394 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.394 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:26.394 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:26.394 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:26.394 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:26.394 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: ]] 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.395 request: 00:28:26.395 { 00:28:26.395 "name": "nvme0", 00:28:26.395 "dhchap_key": "key1", 00:28:26.395 "dhchap_ctrlr_key": "ckey2", 00:28:26.395 "method": "bdev_nvme_set_keys", 00:28:26.395 "req_id": 1 00:28:26.395 } 00:28:26.395 Got JSON-RPC error response 00:28:26.395 response: 00:28:26.395 { 00:28:26.395 "code": -13, 00:28:26.395 "message": "Permission denied" 00:28:26.395 } 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:26.395 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:27.776 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.776 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:27.776 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.776 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.776 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.776 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:27.776 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNmVlZGViOGMxZDc3ZTBiMjM5OGY5OTk0NTgzMDU3ZmY0MDhkNWZjZDY1ZGYyVHWM+A==: 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: ]] 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZDE1Zjk3MjNiMDMzYTRiMDkxYTI2NDQ2YjU2MDRiMjI1ZDkzNzE4OTRlOGExOWMywstATQ==: 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.777 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.778 nvme0n1 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTcwNjhjYjliZWMzYWM3YTZmMWJiYTNmMzhkNWM4MGPTJopR: 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: ]] 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNmOTZkMGI4NTIzMmQ1YjJiOGE5MDM0M2NmZmUwNzfnE7fH: 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.778 request: 00:28:28.778 { 00:28:28.778 "name": "nvme0", 00:28:28.778 "dhchap_key": "key2", 00:28:28.778 "dhchap_ctrlr_key": "ckey1", 00:28:28.778 "method": "bdev_nvme_set_keys", 00:28:28.778 "req_id": 1 00:28:28.778 } 00:28:28.778 Got JSON-RPC error response 00:28:28.778 response: 00:28:28.778 { 00:28:28.778 "code": -13, 00:28:28.778 "message": "Permission denied" 00:28:28.778 } 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.778 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.778 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:28.778 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:30.165 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.165 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:30.165 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.165 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.165 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.165 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:30.165 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:30.165 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:30.165 07:26:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:30.165 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:30.165 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:30.165 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:30.165 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:30.165 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:30.165 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:30.165 rmmod nvme_tcp 00:28:30.165 rmmod nvme_fabrics 00:28:30.165 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:30.165 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:30.165 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:30.165 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3673456 ']' 00:28:30.165 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3673456 00:28:30.165 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 3673456 ']' 00:28:30.166 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 3673456 00:28:30.166 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:28:30.166 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:30.166 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3673456 00:28:30.166 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:30.166 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:30.166 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3673456' 00:28:30.166 killing process with pid 3673456 00:28:30.166 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 3673456 00:28:30.166 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 3673456 00:28:30.166 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:30.166 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:30.166 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:30.166 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:28:30.166 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:30.166 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:30.166 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:30.166 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:30.166 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:30.166 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.166 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:28:30.166 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.079 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:32.340 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:32.340 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:32.340 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:32.340 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:32.340 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:32.340 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:32.340 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:32.340 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:32.340 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:32.340 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:32.340 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:32.340 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:35.645 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:35.645 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:35.645 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:35.906 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:35.906 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:35.906 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:35.906 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:35.906 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:35.906 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:35.906 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:35.906 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:35.906 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:35.906 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:35.906 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:35.906 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:35.906 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:35.906 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:36.478 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.4Fe /tmp/spdk.key-null.mHS /tmp/spdk.key-sha256.gva /tmp/spdk.key-sha384.7yA /tmp/spdk.key-sha512.p8f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:36.478 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:39.784 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:39.784 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:39.784 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:28:39.784 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:39.784 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:39.784 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:39.784 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:39.784 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:39.784 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:39.784 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:39.784 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:39.784 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:39.784 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:39.784 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:39.784 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:39.784 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:39.784 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:40.047 00:28:40.047 real 1m0.743s 00:28:40.047 user 0m54.553s 00:28:40.047 sys 0m16.040s 00:28:40.047 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:40.047 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.047 ************************************ 00:28:40.047 END TEST nvmf_auth_host 00:28:40.047 ************************************ 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.308 ************************************ 00:28:40.308 START TEST nvmf_digest 00:28:40.308 ************************************ 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:40.308 * Looking for test storage... 
00:28:40.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:40.308 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:40.309 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:40.309 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:40.309 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:40.309 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:40.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.309 --rc genhtml_branch_coverage=1 00:28:40.309 --rc genhtml_function_coverage=1 00:28:40.309 --rc genhtml_legend=1 00:28:40.309 --rc geninfo_all_blocks=1 00:28:40.309 --rc geninfo_unexecuted_blocks=1 00:28:40.309 00:28:40.309 ' 00:28:40.309 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:40.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.309 --rc genhtml_branch_coverage=1 00:28:40.309 --rc genhtml_function_coverage=1 00:28:40.309 --rc genhtml_legend=1 00:28:40.309 --rc geninfo_all_blocks=1 00:28:40.309 --rc geninfo_unexecuted_blocks=1 00:28:40.309 00:28:40.309 ' 00:28:40.309 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:40.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.309 --rc genhtml_branch_coverage=1 00:28:40.309 --rc genhtml_function_coverage=1 00:28:40.309 --rc genhtml_legend=1 00:28:40.309 --rc geninfo_all_blocks=1 00:28:40.309 --rc geninfo_unexecuted_blocks=1 00:28:40.309 00:28:40.309 ' 00:28:40.309 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:40.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.309 --rc genhtml_branch_coverage=1 00:28:40.309 --rc genhtml_function_coverage=1 00:28:40.309 --rc genhtml_legend=1 00:28:40.309 --rc geninfo_all_blocks=1 00:28:40.309 --rc geninfo_unexecuted_blocks=1 00:28:40.309 00:28:40.309 ' 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:40.571 
07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:40.571 07:27:02 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:40.571 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:48.742 
07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:48.742 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:48.742 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:48.742 Found net devices under 0000:4b:00.0: cvl_0_0 
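The device scan above resolves each supported PCI function to its kernel net device through sysfs. A minimal stand-alone sketch of the same lookup (the pci variable and the 0000:4b:00.0 address are taken from this host's log and would differ elsewhere):

# Map a PCI NIC to the net devices it exposes, as gather_supported_nvmf_pci_devs does
pci=0000:4b:00.0
for path in "/sys/bus/pci/devices/$pci/net/"*; do
    [ -e "$path" ] || continue        # no netdev bound, e.g. driver not loaded
    echo "Found net device under $pci: ${path##*/}"
done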
00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:48.742 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:48.742 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:48.742 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:48.742 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:48.742 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:48.742 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:48.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:48.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:28:48.742 00:28:48.742 --- 10.0.0.2 ping statistics --- 00:28:48.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.742 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:28:48.742 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:48.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:48.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:28:48.742 00:28:48.742 --- 10.0.0.1 ping statistics --- 00:28:48.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.742 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:28:48.742 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:48.742 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:28:48.742 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:48.742 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:48.743 ************************************ 00:28:48.743 START TEST nvmf_digest_clean 00:28:48.743 ************************************ 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3690883 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3690883 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3690883 ']' 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:48.743 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:48.743 [2024-11-20 07:27:10.271707] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:28:48.743 [2024-11-20 07:27:10.271770] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.743 [2024-11-20 07:27:10.371822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.743 [2024-11-20 07:27:10.424978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.743 [2024-11-20 07:27:10.425032] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.743 [2024-11-20 07:27:10.425040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:48.743 [2024-11-20 07:27:10.425047] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:48.743 [2024-11-20 07:27:10.425054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
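The nvmf_tcp_init sequence above builds a single-machine TCP test rig: the target port (cvl_0_0) is moved into a private network namespace while the initiator port (cvl_0_1) stays in the root namespace, so traffic between 10.0.0.1 and 10.0.0.2 actually traverses the NICs instead of the kernel loopback path. Condensed to its essential commands, a sketch of that setup:

# Two-namespace NVMe/TCP topology, as set up by nvmf_tcp_init above
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                              # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator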
00:28:48.743 [2024-11-20 07:27:10.425815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:49.004 null0 00:28:49.004 [2024-11-20 07:27:11.240933] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.004 [2024-11-20 07:27:11.265267] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3691048 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3691048 /var/tmp/bperf.sock 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3691048 ']' 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:49.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:49.004 07:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:49.266 [2024-11-20 07:27:11.325856] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:28:49.266 [2024-11-20 07:27:11.325921] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3691048 ] 00:28:49.266 [2024-11-20 07:27:11.416739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.266 [2024-11-20 07:27:11.468373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.209 07:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:50.209 07:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:50.209 07:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:50.209 07:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:50.209 07:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:50.209 07:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:50.209 07:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:50.470 nvme0n1 00:28:50.470 07:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:50.470 07:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:50.732 Running I/O for 2 seconds... 
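Each run_bperf pass above follows the same RPC-driven control flow: start bdevperf paused, initialize its framework, attach an NVMe-oF controller with the TCP data digest enabled, then trigger the timed workload. Stripped of the harness wrappers (paths relative to the SPDK checkout; a real script would also wait for the RPC socket to appear), a sketch of one pass:

# One run_bperf pass: --wait-for-rpc defers subsystem init until framework_start_init,
# -z makes bdevperf wait for the perform_tests RPC instead of running immediately
build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # --ddgst enables the crc32c data digest under test
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests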
00:28:52.617 19315.00 IOPS, 75.45 MiB/s [2024-11-20T06:27:14.895Z] 19328.50 IOPS, 75.50 MiB/s 00:28:52.617 Latency(us) 00:28:52.617 [2024-11-20T06:27:14.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.617 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:52.617 nvme0n1 : 2.05 18948.05 74.02 0.00 0.00 6615.88 3017.39 48059.73 00:28:52.617 [2024-11-20T06:27:14.895Z] =================================================================================================================== 00:28:52.617 [2024-11-20T06:27:14.895Z] Total : 18948.05 74.02 0.00 0.00 6615.88 3017.39 48059.73 00:28:52.617 { 00:28:52.617 "results": [ 00:28:52.617 { 00:28:52.617 "job": "nvme0n1", 00:28:52.617 "core_mask": "0x2", 00:28:52.617 "workload": "randread", 00:28:52.617 "status": "finished", 00:28:52.617 "queue_depth": 128, 00:28:52.617 "io_size": 4096, 00:28:52.617 "runtime": 2.046912, 00:28:52.617 "iops": 18948.054435168684, 00:28:52.617 "mibps": 74.01583763737767, 00:28:52.617 "io_failed": 0, 00:28:52.617 "io_timeout": 0, 00:28:52.617 "avg_latency_us": 6615.882926560956, 00:28:52.617 "min_latency_us": 3017.3866666666668, 00:28:52.617 "max_latency_us": 48059.73333333333 00:28:52.617 } 00:28:52.617 ], 00:28:52.617 "core_count": 1 00:28:52.617 } 00:28:52.617 07:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:52.617 07:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:52.617 07:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:52.617 07:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:52.617 | select(.opcode=="crc32c") 00:28:52.617 | "\(.module_name) \(.executed)"' 00:28:52.617 07:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:52.879 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:52.879 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:52.879 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:52.879 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:52.879 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3691048 00:28:52.879 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3691048 ']' 00:28:52.879 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3691048 00:28:52.879 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:52.879 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:52.879 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3691048 00:28:52.879 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:52.879 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:28:52.879 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3691048' 00:28:52.879 killing process with pid 3691048 00:28:52.879 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3691048 00:28:52.879 Received shutdown signal, test time was about 2.000000 seconds 00:28:52.879 00:28:52.879 Latency(us) 00:28:52.879 [2024-11-20T06:27:15.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.879 [2024-11-20T06:27:15.157Z] =================================================================================================================== 00:28:52.879 [2024-11-20T06:27:15.157Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:52.879 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3691048 00:28:53.140 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:53.140 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:53.140 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:53.140 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:53.140 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:53.140 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:53.140 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:53.140 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3691870 00:28:53.140 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3691870 /var/tmp/bperf.sock 00:28:53.140 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3691870 ']' 00:28:53.140 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:53.140 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:53.140 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:53.140 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:53.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:53.140 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:53.140 07:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:53.140 [2024-11-20 07:27:15.238326] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:28:53.140 [2024-11-20 07:27:15.238384] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3691870 ] 00:28:53.140 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:53.140 Zero copy mechanism will not be used. 00:28:53.140 [2024-11-20 07:27:15.324179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.140 [2024-11-20 07:27:15.359754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.082 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:54.082 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:54.082 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:54.082 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:54.082 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:54.082 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:54.082 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:54.654 nvme0n1 00:28:54.654 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:54.654 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:54.654 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:54.654 Zero copy mechanism will not be used. 00:28:54.654 Running I/O for 2 seconds... 
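In the result tables the MiB/s column is derived rather than measured separately: it is IOPS times the I/O size. Checking it against the first pass above (4096-byte reads):

# MiB/s = IOPS * io_size / 2^20
echo '18948.05 * 4096 / 1048576' | bc -l    # -> 74.02, matching the table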
00:28:56.539 5708.00 IOPS, 713.50 MiB/s [2024-11-20T06:27:18.817Z] 5903.50 IOPS, 737.94 MiB/s 00:28:56.539 Latency(us) 00:28:56.539 [2024-11-20T06:27:18.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.539 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:56.539 nvme0n1 : 2.00 5905.69 738.21 0.00 0.00 2706.66 457.39 9994.24 00:28:56.539 [2024-11-20T06:27:18.817Z] =================================================================================================================== 00:28:56.539 [2024-11-20T06:27:18.817Z] Total : 5905.69 738.21 0.00 0.00 2706.66 457.39 9994.24 00:28:56.539 { 00:28:56.539 "results": [ 00:28:56.539 { 00:28:56.539 "job": "nvme0n1", 00:28:56.539 "core_mask": "0x2", 00:28:56.539 "workload": "randread", 00:28:56.539 "status": "finished", 00:28:56.539 "queue_depth": 16, 00:28:56.539 "io_size": 131072, 00:28:56.539 "runtime": 2.001966, 00:28:56.539 "iops": 5905.694702107828, 00:28:56.539 "mibps": 738.2118377634785, 00:28:56.539 "io_failed": 0, 00:28:56.539 "io_timeout": 0, 00:28:56.539 "avg_latency_us": 2706.6564086949165, 00:28:56.539 "min_latency_us": 457.38666666666666, 00:28:56.539 "max_latency_us": 9994.24 00:28:56.539 } 00:28:56.539 ], 00:28:56.539 "core_count": 1 00:28:56.539 } 00:28:56.539 07:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:56.539 07:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:56.539 07:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:56.539 07:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:56.539 | select(.opcode=="crc32c") 00:28:56.539 | "\(.module_name) \(.executed)"' 00:28:56.540 07:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:56.801 07:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:56.801 07:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:56.801 07:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:56.801 07:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:56.801 07:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3691870 00:28:56.801 07:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3691870 ']' 00:28:56.801 07:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3691870 00:28:56.801 07:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:56.801 07:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:56.801 07:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3691870 00:28:56.801 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:56.801 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = 
sudo ']' 00:28:56.801 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3691870' 00:28:56.801 killing process with pid 3691870 00:28:56.801 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3691870 00:28:56.801 Received shutdown signal, test time was about 2.000000 seconds 00:28:56.801 00:28:56.801 Latency(us) 00:28:56.801 [2024-11-20T06:27:19.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.801 [2024-11-20T06:27:19.079Z] =================================================================================================================== 00:28:56.801 [2024-11-20T06:27:19.079Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:56.801 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3691870 00:28:57.061 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:57.061 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:57.061 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:57.061 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:57.061 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:57.061 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:57.061 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:57.061 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3692731 00:28:57.061 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3692731 /var/tmp/bperf.sock 00:28:57.061 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3692731 ']' 00:28:57.061 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:57.061 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:57.061 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:57.061 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:57.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:57.061 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:57.061 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:57.061 [2024-11-20 07:27:19.151180] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
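The killprocess helper that closes each pass (seen above and after every later pass) probes the PID before signalling it; with $pid standing in for the bdevperf PID, the pattern reduces to:

# killprocess: confirm the PID is alive and is ours, then terminate and reap it
kill -0 "$pid"                          # liveness probe, sends no signal
ps --no-headers -o comm= "$pid"         # confirm the command name (reactor_1 here)
kill "$pid" && wait "$pid"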
00:28:57.061 [2024-11-20 07:27:19.151238] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3692731 ] 00:28:57.061 [2024-11-20 07:27:19.234831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.061 [2024-11-20 07:27:19.264082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.003 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:58.003 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:58.003 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:58.003 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:58.003 07:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:58.003 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:58.003 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:58.264 nvme0n1 00:28:58.525 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:58.525 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:58.525 Running I/O for 2 seconds... 
00:29:00.411 30150.00 IOPS, 117.77 MiB/s [2024-11-20T06:27:22.689Z] 30298.50 IOPS, 118.35 MiB/s 00:29:00.411 Latency(us) 00:29:00.411 [2024-11-20T06:27:22.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.411 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:00.411 nvme0n1 : 2.00 30318.92 118.43 0.00 0.00 4216.12 1706.67 14417.92 00:29:00.411 [2024-11-20T06:27:22.689Z] =================================================================================================================== 00:29:00.411 [2024-11-20T06:27:22.689Z] Total : 30318.92 118.43 0.00 0.00 4216.12 1706.67 14417.92 00:29:00.411 { 00:29:00.411 "results": [ 00:29:00.411 { 00:29:00.411 "job": "nvme0n1", 00:29:00.411 "core_mask": "0x2", 00:29:00.411 "workload": "randwrite", 00:29:00.411 "status": "finished", 00:29:00.411 "queue_depth": 128, 00:29:00.411 "io_size": 4096, 00:29:00.411 "runtime": 2.004953, 00:29:00.411 "iops": 30318.915206491125, 00:29:00.411 "mibps": 118.43326252535596, 00:29:00.411 "io_failed": 0, 00:29:00.411 "io_timeout": 0, 00:29:00.411 "avg_latency_us": 4216.121280077208, 00:29:00.411 "min_latency_us": 1706.6666666666667, 00:29:00.411 "max_latency_us": 14417.92 00:29:00.411 } 00:29:00.411 ], 00:29:00.411 "core_count": 1 00:29:00.411 } 00:29:00.411 07:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:00.411 07:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:00.411 07:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:00.411 07:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:00.411 | select(.opcode=="crc32c") 00:29:00.411 | "\(.module_name) \(.executed)"' 00:29:00.411 07:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:00.672 07:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:00.672 07:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:00.672 07:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:00.672 07:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:00.672 07:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3692731 00:29:00.672 07:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3692731 ']' 00:29:00.672 07:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3692731 00:29:00.672 07:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:29:00.673 07:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:00.673 07:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3692731 00:29:00.673 07:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:00.673 07:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:29:00.673 07:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3692731' 00:29:00.673 killing process with pid 3692731 00:29:00.673 07:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3692731 00:29:00.673 Received shutdown signal, test time was about 2.000000 seconds 00:29:00.673 00:29:00.673 Latency(us) 00:29:00.673 [2024-11-20T06:27:22.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.673 [2024-11-20T06:27:22.951Z] =================================================================================================================== 00:29:00.673 [2024-11-20T06:27:22.951Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:00.673 07:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3692731 00:29:00.934 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:00.934 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:00.934 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:00.934 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:00.934 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:00.934 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:00.934 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:00.934 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3693421 00:29:00.934 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3693421 /var/tmp/bperf.sock 00:29:00.934 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3693421 ']' 00:29:00.934 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:00.934 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:00.934 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:00.934 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:00.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:00.934 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:00.934 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:00.934 [2024-11-20 07:27:23.068641] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:29:00.934 [2024-11-20 07:27:23.068698] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3693421 ] 00:29:00.934 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:00.934 Zero copy mechanism will not be used. 00:29:00.934 [2024-11-20 07:27:23.153594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.934 [2024-11-20 07:27:23.183781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.877 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:01.877 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:29:01.877 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:01.877 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:01.877 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:01.877 07:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:01.877 07:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:02.138 nvme0n1 00:29:02.138 07:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:02.138 07:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:02.138 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:02.138 Zero copy mechanism will not be used. 00:29:02.138 Running I/O for 2 seconds... 
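The queue depth, IOPS and average latency in these tables are consistent with Little's law (in-flight I/Os equal IOPS times mean latency) once a run reaches steady state. Checking the randwrite 4k, qd=128 pass above:

# Little's law sanity check: IOPS * avg latency (s) ~= queue depth
echo '30318.92 * 4216.12 / 1000000' | bc -l    # -> ~127.8, i.e. the configured depth of 128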
00:29:04.465 4706.00 IOPS, 588.25 MiB/s [2024-11-20T06:27:26.743Z] 5144.00 IOPS, 643.00 MiB/s 00:29:04.465 Latency(us) 00:29:04.465 [2024-11-20T06:27:26.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.465 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:04.465 nvme0n1 : 2.01 5136.93 642.12 0.00 0.00 3108.68 1249.28 9065.81 00:29:04.465 [2024-11-20T06:27:26.743Z] =================================================================================================================== 00:29:04.465 [2024-11-20T06:27:26.743Z] Total : 5136.93 642.12 0.00 0.00 3108.68 1249.28 9065.81 00:29:04.465 { 00:29:04.465 "results": [ 00:29:04.465 { 00:29:04.465 "job": "nvme0n1", 00:29:04.465 "core_mask": "0x2", 00:29:04.465 "workload": "randwrite", 00:29:04.465 "status": "finished", 00:29:04.465 "queue_depth": 16, 00:29:04.465 "io_size": 131072, 00:29:04.465 "runtime": 2.006451, 00:29:04.465 "iops": 5136.930829609096, 00:29:04.465 "mibps": 642.116353701137, 00:29:04.465 "io_failed": 0, 00:29:04.465 "io_timeout": 0, 00:29:04.465 "avg_latency_us": 3108.680997380421, 00:29:04.465 "min_latency_us": 1249.28, 00:29:04.465 "max_latency_us": 9065.813333333334 00:29:04.465 } 00:29:04.465 ], 00:29:04.465 "core_count": 1 00:29:04.465 } 00:29:04.465 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:04.465 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:04.465 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:04.465 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:04.465 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:04.465 | select(.opcode=="crc32c") 00:29:04.465 | "\(.module_name) \(.executed)"' 00:29:04.465 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:04.465 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:04.465 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:04.465 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:04.465 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3693421 00:29:04.465 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3693421 ']' 00:29:04.465 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3693421 00:29:04.465 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:29:04.465 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:04.465 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3693421 00:29:04.465 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:04.465 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = 
sudo ']' 00:29:04.465 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3693421' 00:29:04.465 killing process with pid 3693421 00:29:04.465 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3693421 00:29:04.465 Received shutdown signal, test time was about 2.000000 seconds 00:29:04.465 00:29:04.465 Latency(us) 00:29:04.465 [2024-11-20T06:27:26.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.465 [2024-11-20T06:27:26.743Z] =================================================================================================================== 00:29:04.465 [2024-11-20T06:27:26.743Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:04.465 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3693421 00:29:04.726 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3690883 00:29:04.726 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3690883 ']' 00:29:04.726 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3690883 00:29:04.726 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:29:04.726 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:04.726 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3690883 00:29:04.726 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:04.726 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:04.726 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3690883' 00:29:04.726 killing process with pid 3690883 00:29:04.726 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3690883 00:29:04.726 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3690883 00:29:04.726 00:29:04.726 real 0m16.757s 00:29:04.726 user 0m32.991s 00:29:04.726 sys 0m3.859s 00:29:04.726 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:04.726 07:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:04.726 ************************************ 00:29:04.726 END TEST nvmf_digest_clean 00:29:04.726 ************************************ 00:29:04.987 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:04.987 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:04.987 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:04.987 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:04.987 ************************************ 00:29:04.987 START TEST nvmf_digest_error 00:29:04.987 ************************************ 00:29:04.987 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # 
run_digest_error 00:29:04.987 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:04.987 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:04.987 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:04.987 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:04.987 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3694129 00:29:04.987 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3694129 00:29:04.987 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:04.987 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3694129 ']' 00:29:04.987 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.987 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:04.987 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.987 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:04.987 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:04.987 [2024-11-20 07:27:27.103150] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:29:04.987 [2024-11-20 07:27:27.103201] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:04.987 [2024-11-20 07:27:27.192926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.987 [2024-11-20 07:27:27.221473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:04.987 [2024-11-20 07:27:27.221498] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:04.987 [2024-11-20 07:27:27.221504] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:04.987 [2024-11-20 07:27:27.221509] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:04.987 [2024-11-20 07:27:27.221513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
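The error-injection variant that starts here differs from the clean test in a single setup step, visible just below: before any I/O runs, the crc32c opcode is routed to the accel "error" module on the target so that digest failures can be provoked deliberately. As a sketch, issued against the target's RPC socket:

# Route all crc32c operations through the error-injection accel module
scripts/rpc.py accel_assign_opc -o crc32c -m error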
00:29:04.987 [2024-11-20 07:27:27.221944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:05.928 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:05.928 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:29:05.928 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:05.928 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:05.928 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:05.928 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:05.928 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:29:05.928 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:05.928 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:05.928 [2024-11-20 07:27:27.931884] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:29:05.928 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:05.928 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:29:05.928 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:29:05.928 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:05.928 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:05.928 null0
00:29:05.928 [2024-11-20 07:27:28.009529] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:05.928 [2024-11-20 07:27:28.033745] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:05.928 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:05.928 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:29:05.928 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:05.928 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:05.928 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:05.928 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:05.928 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3694478
00:29:05.928 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3694478 /var/tmp/bperf.sock
00:29:05.928 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3694478 ']'
00:29:05.928 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
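common_target_config above feeds a JSON config to a bare rpc_cmd, which is why only its results show up as notices: the null0 bdev, the TCP transport init, and the listener on 10.0.0.2 port 4420. A hedged reconstruction of the equivalent per-call RPCs follows; the nqn, address, and port are taken from the attach records further down, while the null-bdev size/block size and the -a (allow any host) flag are assumptions:

  scripts/rpc.py bdev_null_create null0 100 4096        # 100 MiB / 4 KiB block, sizes assumed
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf is then launched with -z, so it too idles until it is configured over its own RPC socket, /var/tmp/bperf.sock.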
00:29:05.928 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:05.928 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:05.928 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:05.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:05.928 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:05.928 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:05.928 [2024-11-20 07:27:28.088600] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:29:05.928 [2024-11-20 07:27:28.088649] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3694478 ]
00:29:05.928 [2024-11-20 07:27:28.173854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:06.189 [2024-11-20 07:27:28.203553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:06.762 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:06.762 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:29:06.762 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:06.762 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:07.023 07:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:07.023 07:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:07.023 07:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:07.023 07:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:07.023 07:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:07.023 07:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:07.284 nvme0n1
00:29:07.284 07:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:07.284 07:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:07.284 07:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
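At this point the host side is fully armed: the controller was attached with --ddgst, so a crc32c data digest is carried on the NVMe/TCP data PDUs; --bdev-retry-count -1 makes the bdev layer retry failed I/O indefinitely; and accel_error_inject_error -o crc32c -t corrupt -i 256 corrupts the host's crc32c results at the configured cadence. Each corrupted digest surfaces in the run below as a "data digest error" from nvme_tcp.c:1365 on the receive path, and the affected READ completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22), that is, status code type 0x0 (generic) with status code 0x22 (Transient Transport Error), a retriable status, so the workload keeps running for the full 2 seconds. A quick sanity check over a saved copy of this trace (file name assumed) is that the two messages come in matched pairs:

  grep -c 'data digest error on tqpair' nvmf_digest_error.log
  grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' nvmf_digest_error.log   # counts should match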
00:29:07.284 07:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.284 07:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:07.284 07:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:07.284 Running I/O for 2 seconds... 00:29:07.545 [2024-11-20 07:27:29.567712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.545 [2024-11-20 07:27:29.567744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.545 [2024-11-20 07:27:29.567753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.545 [2024-11-20 07:27:29.577781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.545 [2024-11-20 07:27:29.577802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.545 [2024-11-20 07:27:29.577809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.545 [2024-11-20 07:27:29.586727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.545 [2024-11-20 07:27:29.586745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.545 [2024-11-20 07:27:29.586751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.545 [2024-11-20 07:27:29.596270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.545 [2024-11-20 07:27:29.596288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.545 [2024-11-20 07:27:29.596295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.545 [2024-11-20 07:27:29.605451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.545 [2024-11-20 07:27:29.605469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.545 [2024-11-20 07:27:29.605476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.545 [2024-11-20 07:27:29.613506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.545 [2024-11-20 07:27:29.613525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.545 [2024-11-20 07:27:29.613531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.545 [2024-11-20 07:27:29.624212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.545 [2024-11-20 07:27:29.624229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.545 [2024-11-20 07:27:29.624243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.545 [2024-11-20 07:27:29.633535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.545 [2024-11-20 07:27:29.633554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.545 [2024-11-20 07:27:29.633560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.545 [2024-11-20 07:27:29.642471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.545 [2024-11-20 07:27:29.642490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.545 [2024-11-20 07:27:29.642496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.546 [2024-11-20 07:27:29.653750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.546 [2024-11-20 07:27:29.653769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.546 [2024-11-20 07:27:29.653776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.546 [2024-11-20 07:27:29.661394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.546 [2024-11-20 07:27:29.661412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.546 [2024-11-20 07:27:29.661419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.546 [2024-11-20 07:27:29.670276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.546 [2024-11-20 07:27:29.670294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.546 [2024-11-20 07:27:29.670300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.546 [2024-11-20 07:27:29.680811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.546 [2024-11-20 07:27:29.680828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.546 [2024-11-20 07:27:29.680834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.546 [2024-11-20 07:27:29.689421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.546 [2024-11-20 07:27:29.689438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.546 [2024-11-20 07:27:29.689445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.546 [2024-11-20 07:27:29.698293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.546 [2024-11-20 07:27:29.698310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.546 [2024-11-20 07:27:29.698316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.546 [2024-11-20 07:27:29.707334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.546 [2024-11-20 07:27:29.707351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.546 [2024-11-20 07:27:29.707357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.546 [2024-11-20 07:27:29.715776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.546 [2024-11-20 07:27:29.715793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.546 [2024-11-20 07:27:29.715799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.546 [2024-11-20 07:27:29.724645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.546 [2024-11-20 07:27:29.724663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.546 [2024-11-20 07:27:29.724669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.546 [2024-11-20 07:27:29.736057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.546 [2024-11-20 07:27:29.736074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.546 [2024-11-20 07:27:29.736080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.546 [2024-11-20 07:27:29.746009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.546 [2024-11-20 07:27:29.746027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.546 [2024-11-20 07:27:29.746033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.546 [2024-11-20 07:27:29.756015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.546 [2024-11-20 07:27:29.756032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.546 [2024-11-20 07:27:29.756038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.546 [2024-11-20 07:27:29.765113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.546 [2024-11-20 07:27:29.765131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.546 [2024-11-20 07:27:29.765138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.546 [2024-11-20 07:27:29.774156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.546 [2024-11-20 07:27:29.774176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.546 [2024-11-20 07:27:29.774182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.546 [2024-11-20 07:27:29.783853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.546 [2024-11-20 07:27:29.783870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.546 [2024-11-20 07:27:29.783880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.546 [2024-11-20 07:27:29.791623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.546 [2024-11-20 07:27:29.791640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.546 [2024-11-20 07:27:29.791647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.546 [2024-11-20 07:27:29.801354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.546 [2024-11-20 07:27:29.801370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.546 [2024-11-20 07:27:29.801377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.546 [2024-11-20 07:27:29.812140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.546 [2024-11-20 07:27:29.812157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.546 
[2024-11-20 07:27:29.812167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.807 [2024-11-20 07:27:29.821916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.807 [2024-11-20 07:27:29.821934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.807 [2024-11-20 07:27:29.821941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.807 [2024-11-20 07:27:29.830955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.807 [2024-11-20 07:27:29.830973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.807 [2024-11-20 07:27:29.830979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.807 [2024-11-20 07:27:29.839081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.807 [2024-11-20 07:27:29.839098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.807 [2024-11-20 07:27:29.839105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.807 [2024-11-20 07:27:29.848401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.807 [2024-11-20 07:27:29.848418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.807 [2024-11-20 07:27:29.848424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.807 [2024-11-20 07:27:29.857649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.807 [2024-11-20 07:27:29.857666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.807 [2024-11-20 07:27:29.857673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.807 [2024-11-20 07:27:29.866375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.807 [2024-11-20 07:27:29.866395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.807 [2024-11-20 07:27:29.866402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.807 [2024-11-20 07:27:29.875305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.807 [2024-11-20 07:27:29.875322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13131 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.807 [2024-11-20 07:27:29.875328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.807 [2024-11-20 07:27:29.883778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.807 [2024-11-20 07:27:29.883795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.807 [2024-11-20 07:27:29.883801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.807 [2024-11-20 07:27:29.893195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.807 [2024-11-20 07:27:29.893212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.807 [2024-11-20 07:27:29.893219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.807 [2024-11-20 07:27:29.903039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.807 [2024-11-20 07:27:29.903057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.807 [2024-11-20 07:27:29.903063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.807 [2024-11-20 07:27:29.911163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.807 [2024-11-20 07:27:29.911180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.807 [2024-11-20 07:27:29.911186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.807 [2024-11-20 07:27:29.919903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.807 [2024-11-20 07:27:29.919920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.807 [2024-11-20 07:27:29.919927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.807 [2024-11-20 07:27:29.931691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.807 [2024-11-20 07:27:29.931708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.807 [2024-11-20 07:27:29.931714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.807 [2024-11-20 07:27:29.940111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.807 [2024-11-20 07:27:29.940129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:33 nsid:1 lba:24708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.807 [2024-11-20 07:27:29.940135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.807 [2024-11-20 07:27:29.952447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.807 [2024-11-20 07:27:29.952464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.807 [2024-11-20 07:27:29.952471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.807 [2024-11-20 07:27:29.965665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.807 [2024-11-20 07:27:29.965683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.807 [2024-11-20 07:27:29.965690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.807 [2024-11-20 07:27:29.976109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.807 [2024-11-20 07:27:29.976126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.807 [2024-11-20 07:27:29.976133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.807 [2024-11-20 07:27:29.985438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.807 [2024-11-20 07:27:29.985455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.807 [2024-11-20 07:27:29.985461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.807 [2024-11-20 07:27:29.995425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.807 [2024-11-20 07:27:29.995442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.807 [2024-11-20 07:27:29.995448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.807 [2024-11-20 07:27:30.006053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.807 [2024-11-20 07:27:30.006071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.807 [2024-11-20 07:27:30.006078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.807 [2024-11-20 07:27:30.013579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.807 [2024-11-20 07:27:30.013596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.807 [2024-11-20 07:27:30.013602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.807 [2024-11-20 07:27:30.023638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.807 [2024-11-20 07:27:30.023655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.807 [2024-11-20 07:27:30.023662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.807 [2024-11-20 07:27:30.032188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.807 [2024-11-20 07:27:30.032205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.808 [2024-11-20 07:27:30.032216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.808 [2024-11-20 07:27:30.041139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.808 [2024-11-20 07:27:30.041156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.808 [2024-11-20 07:27:30.041166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.808 [2024-11-20 07:27:30.050540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.808 [2024-11-20 07:27:30.050559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.808 [2024-11-20 07:27:30.050569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.808 [2024-11-20 07:27:30.058954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.808 [2024-11-20 07:27:30.058972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.808 [2024-11-20 07:27:30.058979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.808 [2024-11-20 07:27:30.067655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:07.808 [2024-11-20 07:27:30.067672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.808 [2024-11-20 07:27:30.067679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.808 [2024-11-20 07:27:30.077464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 
00:29:07.808 [2024-11-20 07:27:30.077482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.808 [2024-11-20 07:27:30.077488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.070 [2024-11-20 07:27:30.089271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.070 [2024-11-20 07:27:30.089288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.070 [2024-11-20 07:27:30.089295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.070 [2024-11-20 07:27:30.096926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.070 [2024-11-20 07:27:30.096942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.070 [2024-11-20 07:27:30.096949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.070 [2024-11-20 07:27:30.106285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.070 [2024-11-20 07:27:30.106303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.070 [2024-11-20 07:27:30.106309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.070 [2024-11-20 07:27:30.115406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.070 [2024-11-20 07:27:30.115426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.070 [2024-11-20 07:27:30.115433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.070 [2024-11-20 07:27:30.123370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.070 [2024-11-20 07:27:30.123387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.070 [2024-11-20 07:27:30.123394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.070 [2024-11-20 07:27:30.132807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.070 [2024-11-20 07:27:30.132825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.070 [2024-11-20 07:27:30.132831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.070 [2024-11-20 07:27:30.142813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x9dd0e0) 00:29:08.070 [2024-11-20 07:27:30.142830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.070 [2024-11-20 07:27:30.142836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.070 [2024-11-20 07:27:30.151800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.070 [2024-11-20 07:27:30.151817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.070 [2024-11-20 07:27:30.151824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.070 [2024-11-20 07:27:30.161620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.070 [2024-11-20 07:27:30.161637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.070 [2024-11-20 07:27:30.161644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.070 [2024-11-20 07:27:30.170357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.070 [2024-11-20 07:27:30.170374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.070 [2024-11-20 07:27:30.170380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.070 [2024-11-20 07:27:30.178730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.070 [2024-11-20 07:27:30.178747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.070 [2024-11-20 07:27:30.178753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.070 [2024-11-20 07:27:30.187859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.070 [2024-11-20 07:27:30.187876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.071 [2024-11-20 07:27:30.187883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.071 [2024-11-20 07:27:30.196552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.071 [2024-11-20 07:27:30.196569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.071 [2024-11-20 07:27:30.196576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.071 [2024-11-20 07:27:30.205482] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.071 [2024-11-20 07:27:30.205499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.071 [2024-11-20 07:27:30.205505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.071 [2024-11-20 07:27:30.214411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.071 [2024-11-20 07:27:30.214428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.071 [2024-11-20 07:27:30.214435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.071 [2024-11-20 07:27:30.223211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.071 [2024-11-20 07:27:30.223228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.071 [2024-11-20 07:27:30.223235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.071 [2024-11-20 07:27:30.232267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.071 [2024-11-20 07:27:30.232284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.071 [2024-11-20 07:27:30.232290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.071 [2024-11-20 07:27:30.240840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.071 [2024-11-20 07:27:30.240857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.071 [2024-11-20 07:27:30.240863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.071 [2024-11-20 07:27:30.249960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.071 [2024-11-20 07:27:30.249978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.071 [2024-11-20 07:27:30.249984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.071 [2024-11-20 07:27:30.258619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.071 [2024-11-20 07:27:30.258636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.071 [2024-11-20 07:27:30.258642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:08.071 [2024-11-20 07:27:30.268570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.071 [2024-11-20 07:27:30.268588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.071 [2024-11-20 07:27:30.268598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.071 [2024-11-20 07:27:30.277421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.071 [2024-11-20 07:27:30.277438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.071 [2024-11-20 07:27:30.277445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.071 [2024-11-20 07:27:30.286408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.071 [2024-11-20 07:27:30.286425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.071 [2024-11-20 07:27:30.286431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.071 [2024-11-20 07:27:30.298037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.071 [2024-11-20 07:27:30.298055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.071 [2024-11-20 07:27:30.298061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.071 [2024-11-20 07:27:30.309408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.071 [2024-11-20 07:27:30.309425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.071 [2024-11-20 07:27:30.309432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.071 [2024-11-20 07:27:30.317370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.071 [2024-11-20 07:27:30.317386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.071 [2024-11-20 07:27:30.317393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.071 [2024-11-20 07:27:30.328763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.071 [2024-11-20 07:27:30.328781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.071 [2024-11-20 07:27:30.328787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.071 [2024-11-20 07:27:30.336968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.071 [2024-11-20 07:27:30.336985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.071 [2024-11-20 07:27:30.336992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.334 [2024-11-20 07:27:30.345585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.334 [2024-11-20 07:27:30.345603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.334 [2024-11-20 07:27:30.345609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.334 [2024-11-20 07:27:30.354483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.334 [2024-11-20 07:27:30.354500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.334 [2024-11-20 07:27:30.354506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.334 [2024-11-20 07:27:30.362920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.334 [2024-11-20 07:27:30.362937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.334 [2024-11-20 07:27:30.362943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.334 [2024-11-20 07:27:30.372327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.334 [2024-11-20 07:27:30.372344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.334 [2024-11-20 07:27:30.372350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.334 [2024-11-20 07:27:30.381949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.334 [2024-11-20 07:27:30.381966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.334 [2024-11-20 07:27:30.381973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.334 [2024-11-20 07:27:30.390863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0) 00:29:08.334 [2024-11-20 07:27:30.390880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.334 [2024-11-20 07:27:30.390886] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:08.334 [2024-11-20 07:27:30.399425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dd0e0)
00:29:08.334 [2024-11-20 07:27:30.399442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:08.334 [2024-11-20 07:27:30.399448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence — an injected data digest error on tqpair=(0x9dd0e0), the affected READ on sqid:1, and its completion as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, so the I/O is retried — repeats continuously from 07:27:30.399 through 07:27:31.559, with only the timestamps, cid, and lba values varying; the error counter read back below totals 217 for this run. Interim throughput markers during the flood: 27241.00 IOPS, 106.41 MiB/s [2024-11-20T06:27:30.613Z] and 27603.00 IOPS, 107.82 MiB/s [2024-11-20T06:27:31.664Z] ...]
00:29:09.386
00:29:09.386 Latency(us)
00:29:09.386 [2024-11-20T06:27:31.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:09.386 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:09.386 nvme0n1 : 2.04 27076.00 105.77 0.00 0.00 4628.91 2129.92 43472.21
00:29:09.386 [2024-11-20T06:27:31.664Z] ===================================================================================================================
00:29:09.386 [2024-11-20T06:27:31.664Z] Total : 27076.00 105.77 0.00 0.00 4628.91 2129.92 43472.21
00:29:09.386 {
00:29:09.386   "results": [
00:29:09.386     {
00:29:09.386       "job": "nvme0n1",
00:29:09.386       "core_mask": "0x2",
00:29:09.386       "workload": "randread",
00:29:09.386       "status": "finished",
00:29:09.386       "queue_depth": 128,
00:29:09.386       "io_size": 4096,
00:29:09.386       "runtime": 2.044578,
00:29:09.386       "iops": 27076.002969805995,
00:29:09.386       "mibps": 105.76563660080467,
00:29:09.386       "io_failed": 0,
00:29:09.386       "io_timeout": 0,
00:29:09.386       "avg_latency_us": 4628.906797690228,
00:29:09.386       "min_latency_us": 2129.92,
00:29:09.386       "max_latency_us": 43472.21333333333
00:29:09.386     }
00:29:09.386   ],
00:29:09.386   "core_count": 1
00:29:09.386 }
00:29:09.386 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
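With the 4 KiB randread pass finished, the harness reads the controller's NVMe error counters back out of bdevperf to confirm the injected digest failures were counted as transient transport errors. The get_transient_errcount helper traced below is just one bdev_get_iostat RPC piped through jq; the following is a minimal sketch of it, assuming (as in this run) that the bdevperf RPC socket is /var/tmp/bperf.sock and the bdev is nvme0n1:

    #!/usr/bin/env bash
    # Minimal sketch of get_transient_errcount as traced below. The nvme_error
    # block only exists because bdev_nvme_set_options was given --nvme-error-stat.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    get_transient_errcount() {
        local bdev=$1
        "$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # The test only requires the count to be non-zero; this run reported 217.
    (( $(get_transient_errcount nvme0n1) > 0 ))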
00:29:09.386 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:09.386 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:09.386 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:09.386 | .driver_specific
00:29:09.386 | .nvme_error
00:29:09.386 | .status_code
00:29:09.386 | .command_transient_transport_error'
00:29:09.386 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:09.647 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 217 > 0 ))
00:29:09.647 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3694478
00:29:09.647 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3694478 ']'
00:29:09.647 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3694478
00:29:09.647 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:29:09.648 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:09.648 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3694478
00:29:09.648 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:29:09.648 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:29:09.648 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3694478'
00:29:09.648 killing process with pid 3694478
00:29:09.648 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3694478
00:29:09.648 Received shutdown signal, test time was about 2.000000 seconds
00:29:09.648
00:29:09.648 Latency(us)
00:29:09.648 [2024-11-20T06:27:31.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:09.648 [2024-11-20T06:27:31.926Z] ===================================================================================================================
00:29:09.648 [2024-11-20T06:27:31.926Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:09.648 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3694478
00:29:09.908 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:09.908 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:09.908 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:09.908 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:09.908 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:09.908 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3695159
00:29:09.908 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3695159 /var/tmp/bperf.sock
00:29:09.908 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3695159 ']'
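
get_transient_errcount, traced above, is where the test asserts that digest corruption was actually seen: it calls bdev_get_iostat over the bperf RPC socket and drills into the per-bdev NVMe error counters (these appear because the controller options include --nvme-error-stat), and the (( 217 > 0 )) check is that count. Roughly the same query can be issued by hand while a bdevperf instance is still listening; this is a sketch using the socket and workspace paths seen in this log, not part of the harness:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
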
00:29:09.908 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:09.908 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:09.908 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:09.908 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:09.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:09.908 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:09.908 07:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:09.909 [2024-11-20 07:27:32.017092] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:29:09.909 [2024-11-20 07:27:32.017144] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3695159 ]
00:29:09.909 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:09.909 Zero copy mechanism will not be used.
00:29:09.909 [2024-11-20 07:27:32.099045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:09.909 [2024-11-20 07:27:32.128001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:10.853 07:27:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:10.853 07:27:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:29:10.853 07:27:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:10.853 07:27:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:10.853 07:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:10.853 07:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:10.853 07:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:10.853 07:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:10.853 07:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:10.853 07:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:11.425 nvme0n1
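
Condensed, the RPC sequence driven by host/digest.sh for this 128 KiB randread error pass looks like the sketch below: same socket, target address, and NQN as in the trace above, with the corrupt/perform_tests steps being the ones traced immediately after this point. A paraphrase for readability, not a verbatim excerpt of the script:

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $RPC accel_error_inject_error -o crc32c -t disable        # clear any leftover injection
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0        # --ddgst enables the TCP data digest
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32  # corrupt crc32c results so received digests miscompare
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests
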
00:29:11.425 07:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:11.425 07:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:11.425 07:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:11.425 07:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:11.425 07:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:11.425 07:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:11.425 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:11.425 Zero copy mechanism will not be used.
00:29:11.426 Running I/O for 2 seconds...
00:29:11.426 [2024-11-20 07:27:33.568992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870)
00:29:11.426 [2024-11-20 07:27:33.569026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.426 [2024-11-20 07:27:33.569035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:29:11.426 [2024-11-20 07:27:33.580305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870)
00:29:11.426 [2024-11-20 07:27:33.580329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.426 [2024-11-20 07:27:33.580336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:29:11.426 [2024-11-20 07:27:33.591223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870)
00:29:11.426 [2024-11-20 07:27:33.591243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.426 [2024-11-20 07:27:33.591251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:11.426 [2024-11-20 07:27:33.603163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870)
00:29:11.426 [2024-11-20 07:27:33.603182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.426 [2024-11-20 07:27:33.603189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:29:11.426 [2024-11-20 07:27:33.615057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870)
00:29:11.426 [2024-11-20 07:27:33.615077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.426 [2024-11-20 07:27:33.615084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0
sqhd:0009 p:0 m:0 dnr:0 00:29:11.426 [2024-11-20 07:27:33.626137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.426 [2024-11-20 07:27:33.626156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.426 [2024-11-20 07:27:33.626168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:11.426 [2024-11-20 07:27:33.637337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.426 [2024-11-20 07:27:33.637355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.426 [2024-11-20 07:27:33.637362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:11.426 [2024-11-20 07:27:33.650569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.426 [2024-11-20 07:27:33.650589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.426 [2024-11-20 07:27:33.650596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:11.426 [2024-11-20 07:27:33.662102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.426 [2024-11-20 07:27:33.662121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.426 [2024-11-20 07:27:33.662127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:11.426 [2024-11-20 07:27:33.671245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.426 [2024-11-20 07:27:33.671263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.426 [2024-11-20 07:27:33.671270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:11.426 [2024-11-20 07:27:33.679922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.426 [2024-11-20 07:27:33.679941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.426 [2024-11-20 07:27:33.679948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:11.426 [2024-11-20 07:27:33.690918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.426 [2024-11-20 07:27:33.690937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.426 [2024-11-20 07:27:33.690944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:11.687 [2024-11-20 07:27:33.702676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.702695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.688 [2024-11-20 07:27:33.702706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:11.688 [2024-11-20 07:27:33.714038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.714057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.688 [2024-11-20 07:27:33.714063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:11.688 [2024-11-20 07:27:33.722507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.722525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.688 [2024-11-20 07:27:33.722532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:11.688 [2024-11-20 07:27:33.730550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.730568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.688 [2024-11-20 07:27:33.730575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:11.688 [2024-11-20 07:27:33.740113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.740131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.688 [2024-11-20 07:27:33.740138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:11.688 [2024-11-20 07:27:33.751610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.751630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.688 [2024-11-20 07:27:33.751637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:11.688 [2024-11-20 07:27:33.763215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.763234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.688 [2024-11-20 07:27:33.763241] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:11.688 [2024-11-20 07:27:33.773793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.773812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.688 [2024-11-20 07:27:33.773819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:11.688 [2024-11-20 07:27:33.784190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.784209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.688 [2024-11-20 07:27:33.784216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:11.688 [2024-11-20 07:27:33.792368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.792390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.688 [2024-11-20 07:27:33.792396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:11.688 [2024-11-20 07:27:33.796963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.796981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.688 [2024-11-20 07:27:33.796988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:11.688 [2024-11-20 07:27:33.801391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.801410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.688 [2024-11-20 07:27:33.801416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:11.688 [2024-11-20 07:27:33.805954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.805973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.688 [2024-11-20 07:27:33.805979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:11.688 [2024-11-20 07:27:33.810307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.810326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:11.688 [2024-11-20 07:27:33.810332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:11.688 [2024-11-20 07:27:33.814641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.814659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.688 [2024-11-20 07:27:33.814666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:11.688 [2024-11-20 07:27:33.820199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.820217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.688 [2024-11-20 07:27:33.820224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:11.688 [2024-11-20 07:27:33.831500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.831518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.688 [2024-11-20 07:27:33.831525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:11.688 [2024-11-20 07:27:33.842960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.842979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.688 [2024-11-20 07:27:33.842989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:11.688 [2024-11-20 07:27:33.854515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.854534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.688 [2024-11-20 07:27:33.854541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:11.688 [2024-11-20 07:27:33.866498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.866517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.688 [2024-11-20 07:27:33.866523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:11.688 [2024-11-20 07:27:33.877447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.877465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.688 [2024-11-20 07:27:33.877472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:11.688 [2024-11-20 07:27:33.888097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.888116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.688 [2024-11-20 07:27:33.888122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:11.688 [2024-11-20 07:27:33.899825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.899844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.688 [2024-11-20 07:27:33.899850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:11.688 [2024-11-20 07:27:33.910009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.688 [2024-11-20 07:27:33.910027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.689 [2024-11-20 07:27:33.910033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:11.689 [2024-11-20 07:27:33.920206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.689 [2024-11-20 07:27:33.920224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.689 [2024-11-20 07:27:33.920230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:11.689 [2024-11-20 07:27:33.932041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.689 [2024-11-20 07:27:33.932060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.689 [2024-11-20 07:27:33.932066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:11.689 [2024-11-20 07:27:33.943759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.689 [2024-11-20 07:27:33.943781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.689 [2024-11-20 07:27:33.943788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:11.689 [2024-11-20 07:27:33.953848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.689 [2024-11-20 07:27:33.953867] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.689 [2024-11-20 07:27:33.953874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:11.951 [2024-11-20 07:27:33.965383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.951 [2024-11-20 07:27:33.965401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.951 [2024-11-20 07:27:33.965408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:11.951 [2024-11-20 07:27:33.974945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.951 [2024-11-20 07:27:33.974964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.951 [2024-11-20 07:27:33.974971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:11.951 [2024-11-20 07:27:33.985314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.952 [2024-11-20 07:27:33.985332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.952 [2024-11-20 07:27:33.985338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:11.952 [2024-11-20 07:27:33.995742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.952 [2024-11-20 07:27:33.995760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.952 [2024-11-20 07:27:33.995767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:11.952 [2024-11-20 07:27:34.006117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.952 [2024-11-20 07:27:34.006135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.952 [2024-11-20 07:27:34.006141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:11.952 [2024-11-20 07:27:34.017453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.952 [2024-11-20 07:27:34.017471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.952 [2024-11-20 07:27:34.017478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:11.952 [2024-11-20 07:27:34.028458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 
00:29:11.952 [2024-11-20 07:27:34.028476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.952 [2024-11-20 07:27:34.028483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:11.952 [2024-11-20 07:27:34.039317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.952 [2024-11-20 07:27:34.039335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.952 [2024-11-20 07:27:34.039341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:11.952 [2024-11-20 07:27:34.050502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.952 [2024-11-20 07:27:34.050519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.952 [2024-11-20 07:27:34.050526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:11.952 [2024-11-20 07:27:34.061675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.952 [2024-11-20 07:27:34.061693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.952 [2024-11-20 07:27:34.061699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:11.952 [2024-11-20 07:27:34.072518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.952 [2024-11-20 07:27:34.072537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.952 [2024-11-20 07:27:34.072544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:11.952 [2024-11-20 07:27:34.081114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.952 [2024-11-20 07:27:34.081133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.952 [2024-11-20 07:27:34.081140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:11.952 [2024-11-20 07:27:34.093279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.952 [2024-11-20 07:27:34.093297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.952 [2024-11-20 07:27:34.093305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:11.952 [2024-11-20 07:27:34.101549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.952 [2024-11-20 07:27:34.101567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.952 [2024-11-20 07:27:34.101573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:11.952 [2024-11-20 07:27:34.109413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.952 [2024-11-20 07:27:34.109431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.952 [2024-11-20 07:27:34.109438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:11.952 [2024-11-20 07:27:34.113879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.952 [2024-11-20 07:27:34.113897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.952 [2024-11-20 07:27:34.113907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:11.952 [2024-11-20 07:27:34.122474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.952 [2024-11-20 07:27:34.122492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.952 [2024-11-20 07:27:34.122499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:11.952 [2024-11-20 07:27:34.128670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.952 [2024-11-20 07:27:34.128689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.952 [2024-11-20 07:27:34.128695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:11.952 [2024-11-20 07:27:34.139016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.952 [2024-11-20 07:27:34.139034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.952 [2024-11-20 07:27:34.139041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:11.952 [2024-11-20 07:27:34.150109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.952 [2024-11-20 07:27:34.150127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.952 [2024-11-20 07:27:34.150134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:11.952 [2024-11-20 07:27:34.161307] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.952 [2024-11-20 07:27:34.161326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.952 [2024-11-20 07:27:34.161332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:11.952 [2024-11-20 07:27:34.173190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.952 [2024-11-20 07:27:34.173209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.952 [2024-11-20 07:27:34.173216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:11.952 [2024-11-20 07:27:34.185722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.952 [2024-11-20 07:27:34.185741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.952 [2024-11-20 07:27:34.185747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:11.952 [2024-11-20 07:27:34.197720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.952 [2024-11-20 07:27:34.197739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.952 [2024-11-20 07:27:34.197746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:11.952 [2024-11-20 07:27:34.210047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.952 [2024-11-20 07:27:34.210070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.952 [2024-11-20 07:27:34.210076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:11.952 [2024-11-20 07:27:34.220510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:11.952 [2024-11-20 07:27:34.220529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.953 [2024-11-20 07:27:34.220536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:12.215 [2024-11-20 07:27:34.230076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.215 [2024-11-20 07:27:34.230095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.215 [2024-11-20 07:27:34.230101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 
dnr:0 00:29:12.215 [2024-11-20 07:27:34.239468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.215 [2024-11-20 07:27:34.239487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.215 [2024-11-20 07:27:34.239494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:12.215 [2024-11-20 07:27:34.249946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.215 [2024-11-20 07:27:34.249964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.215 [2024-11-20 07:27:34.249971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:12.215 [2024-11-20 07:27:34.259333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.215 [2024-11-20 07:27:34.259351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.215 [2024-11-20 07:27:34.259358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:12.215 [2024-11-20 07:27:34.269662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.215 [2024-11-20 07:27:34.269681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.215 [2024-11-20 07:27:34.269689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:12.215 [2024-11-20 07:27:34.279719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.215 [2024-11-20 07:27:34.279738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.215 [2024-11-20 07:27:34.279744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:12.215 [2024-11-20 07:27:34.291731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.215 [2024-11-20 07:27:34.291750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.215 [2024-11-20 07:27:34.291757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:12.215 [2024-11-20 07:27:34.302177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.215 [2024-11-20 07:27:34.302195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.215 [2024-11-20 07:27:34.302202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:12.215 [2024-11-20 07:27:34.313883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.215 [2024-11-20 07:27:34.313902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.215 [2024-11-20 07:27:34.313908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:12.215 [2024-11-20 07:27:34.322897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.215 [2024-11-20 07:27:34.322916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.215 [2024-11-20 07:27:34.322923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:12.215 [2024-11-20 07:27:34.334814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.215 [2024-11-20 07:27:34.334833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.215 [2024-11-20 07:27:34.334839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:12.215 [2024-11-20 07:27:34.345891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.215 [2024-11-20 07:27:34.345909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.215 [2024-11-20 07:27:34.345916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:12.215 [2024-11-20 07:27:34.357198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.215 [2024-11-20 07:27:34.357217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.216 [2024-11-20 07:27:34.357223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:12.216 [2024-11-20 07:27:34.367733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.216 [2024-11-20 07:27:34.367750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.216 [2024-11-20 07:27:34.367757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:12.216 [2024-11-20 07:27:34.377914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.216 [2024-11-20 07:27:34.377932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.216 [2024-11-20 07:27:34.377939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:12.216 [2024-11-20 07:27:34.388728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.216 [2024-11-20 07:27:34.388747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.216 [2024-11-20 07:27:34.388757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:12.216 [2024-11-20 07:27:34.401673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.216 [2024-11-20 07:27:34.401693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.216 [2024-11-20 07:27:34.401699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:12.216 [2024-11-20 07:27:34.414390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.216 [2024-11-20 07:27:34.414409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.216 [2024-11-20 07:27:34.414415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:12.216 [2024-11-20 07:27:34.424690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.216 [2024-11-20 07:27:34.424709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.216 [2024-11-20 07:27:34.424715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:12.216 [2024-11-20 07:27:34.436251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.216 [2024-11-20 07:27:34.436270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.216 [2024-11-20 07:27:34.436276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:12.216 [2024-11-20 07:27:34.447672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.216 [2024-11-20 07:27:34.447691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.216 [2024-11-20 07:27:34.447698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:12.216 [2024-11-20 07:27:34.459415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.216 [2024-11-20 07:27:34.459433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:12.216 [2024-11-20 07:27:34.459440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:12.216 [2024-11-20 07:27:34.471362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.216 [2024-11-20 07:27:34.471381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.216 [2024-11-20 07:27:34.471387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:12.216 [2024-11-20 07:27:34.482523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.216 [2024-11-20 07:27:34.482541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.216 [2024-11-20 07:27:34.482548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:12.479 [2024-11-20 07:27:34.492267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.479 [2024-11-20 07:27:34.492286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.479 [2024-11-20 07:27:34.492293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:12.479 [2024-11-20 07:27:34.502239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.479 [2024-11-20 07:27:34.502258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.479 [2024-11-20 07:27:34.502264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:12.479 [2024-11-20 07:27:34.512562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.479 [2024-11-20 07:27:34.512581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.479 [2024-11-20 07:27:34.512587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:12.479 [2024-11-20 07:27:34.524543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.479 [2024-11-20 07:27:34.524562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.479 [2024-11-20 07:27:34.524569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:12.479 [2024-11-20 07:27:34.534842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870) 00:29:12.479 [2024-11-20 07:27:34.534861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.479 [2024-11-20 07:27:34.534868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:12.479 [2024-11-20 07:27:34.545881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1588870)
00:29:12.479 [2024-11-20 07:27:34.545900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.479 [2024-11-20 07:27:34.545906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
3010.00 IOPS, 376.25 MiB/s [2024-11-20T06:27:34.757Z]
[... 2024-11-20 07:27:34.557 through 07:27:35.562: the same three-line pattern repeats for every in-flight READ on qid:1 (cids 0, 1, 3, 7-15; len:32, lba varying per command): "data digest error on tqpair=(0x1588870)" from nvme_tcp.c:1365, the affected READ printed by nvme_qpair.c: 243:nvme_io_qpair_print_command, and its completion printed by nvme_qpair.c: 474:spdk_nvme_print_completion as COMMAND TRANSIENT TRANSPORT ERROR (00/22) cdw0:0 p:0 m:0 dnr:0; the periodic per-second throughput samples continue to interleave with these errors ...]
00:29:13.531 3281.50 IOPS, 410.19 MiB/s
00:29:13.531
00:29:13.531 Latency(us)
00:29:13.531 [2024-11-20T06:27:35.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:13.531 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:13.531 nvme0n1 : 2.00 3286.22 410.78 0.00 0.00 4865.83 815.79 13380.27
00:29:13.531 [2024-11-20T06:27:35.809Z] ===================================================================================================================
00:29:13.531 [2024-11-20T06:27:35.809Z] Total : 3286.22 410.78 0.00 0.00 4865.83 815.79 13380.27
00:29:13.531 {
00:29:13.531   "results": [
00:29:13.531     {
00:29:13.531       "job": "nvme0n1",
00:29:13.531       "core_mask": "0x2",
00:29:13.531       "workload": "randread",
00:29:13.531       "status": "finished",
00:29:13.531       "queue_depth": 16,
00:29:13.531       "io_size": 131072,
00:29:13.531       "runtime": 2.001997,
00:29:13.531       "iops": 3286.2187106174483,
00:29:13.531       "mibps": 410.77733882718104,
00:29:13.531       "io_failed": 0,
00:29:13.531       "io_timeout": 0,
00:29:13.531       "avg_latency_us": 4865.830665248011,
00:29:13.531       "min_latency_us": 815.7866666666666,
00:29:13.531       "max_latency_us": 13380.266666666666
00:29:13.531     }
00:29:13.531   ],
00:29:13.531   "core_count": 1
00:29:13.531 }
07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 212 > 0 ))
07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3695159
07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3695159 ']'
07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3695159
07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3695159
07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3695159'
killing process with pid 3695159
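The probe traced above is one RPC plus a jq filter: bdev_get_iostat reports per-bdev NVMe error counters (present only because the controller was set up with bdev_nvme_set_options --nvme-error-stat), and the filter pulls out the transient-transport-error count the harness compares against zero, 212 in this run. A minimal sketch of running the same probe by hand, with the socket and bdev name taken from the trace:

    # Count of commands completed with Transient Transport Error (00/22) so far.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'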
00:29:13.792 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3695159 00:29:13.792 Received shutdown signal, test time was about 2.000000 seconds 00:29:13.792 00:29:13.792 Latency(us) 00:29:13.792 [2024-11-20T06:27:36.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.792 [2024-11-20T06:27:36.070Z] =================================================================================================================== 00:29:13.792 [2024-11-20T06:27:36.070Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:13.792 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3695159 00:29:13.792 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:13.792 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:13.792 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:13.792 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:13.792 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:13.792 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3695847 00:29:13.792 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3695847 /var/tmp/bperf.sock 00:29:13.792 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3695847 ']' 00:29:13.792 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:13.792 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:13.792 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:13.792 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:13.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:13.792 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:13.792 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:13.792 [2024-11-20 07:27:35.988960] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:29:13.792 [2024-11-20 07:27:35.989016] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3695847 ] 00:29:14.054 [2024-11-20 07:27:36.074057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.054 [2024-11-20 07:27:36.103538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.626 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:14.626 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:29:14.626 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:14.626 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:14.887 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:14.887 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.887 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:14.887 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.887 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:14.887 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:15.147 nvme0n1 00:29:15.147 07:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:15.147 07:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.147 07:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:15.147 07:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.147 07:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:15.147 07:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:15.147 Running I/O for 2 seconds... 
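Condensed from the xtrace above, the randwrite error pass drives everything through two RPC sockets: the nvmf target's default socket (rpc_cmd) for crc32c error injection, and /var/tmp/bperf.sock for the freshly started bdevperf process. A sketch of the sequence, with the rpc.py path abbreviated:

    # 1) Keep per-status-code NVMe error counters and retry failed I/O forever,
    #    so corrupted digests surface as statistics instead of failed writes.
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # 2) Make sure crc32c injection is disabled while the controller attaches cleanly.
    rpc.py accel_error_inject_error -o crc32c -t disable
    # 3) Attach over TCP with data digest enabled (--ddgst): every payload is CRC32C-checked.
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # 4) Corrupt the next 256 crc32c operations on the target, then run the 2-second workload.
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    bdevperf.py -s /var/tmp/bperf.sock perform_tests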
00:29:15.147 [2024-11-20 07:27:37.416418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166f0788 00:29:15.147 [2024-11-20 07:27:37.417529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-20 07:27:37.417556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:15.407 [2024-11-20 07:27:37.424994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166f0788 00:29:15.407 [2024-11-20 07:27:37.426079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.407 [2024-11-20 07:27:37.426100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:15.407 [2024-11-20 07:27:37.433557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166f0788 00:29:15.407 [2024-11-20 07:27:37.434635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.407 [2024-11-20 07:27:37.434653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:15.407 [2024-11-20 07:27:37.442076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166f0788 00:29:15.407 [2024-11-20 07:27:37.443152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.407 [2024-11-20 07:27:37.443171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:15.407 [2024-11-20 07:27:37.449953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166df550 00:29:15.407 [2024-11-20 07:27:37.451220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.407 [2024-11-20 07:27:37.451237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:15.407 [2024-11-20 07:27:37.457706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166f8e88 00:29:15.407 [2024-11-20 07:27:37.458427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.407 [2024-11-20 07:27:37.458444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:15.407 [2024-11-20 07:27:37.466368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166f7da8 00:29:15.407 [2024-11-20 07:27:37.467096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.407 [2024-11-20 07:27:37.467116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:002c 
p:0 m:0 dnr:0 00:29:15.407 [2024-11-20 07:27:37.475109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e7c50 00:29:15.407 [2024-11-20 07:27:37.475574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.407 [2024-11-20 07:27:37.475591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:15.407 [2024-11-20 07:27:37.484894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e95a0 00:29:15.407 [2024-11-20 07:27:37.486067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.407 [2024-11-20 07:27:37.486083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:15.407 [2024-11-20 07:27:37.491969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166eaef0 00:29:15.407 [2024-11-20 07:27:37.492714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.407 [2024-11-20 07:27:37.492730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:15.408 [2024-11-20 07:27:37.500350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e5220 00:29:15.408 [2024-11-20 07:27:37.501077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.408 [2024-11-20 07:27:37.501093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:15.408 [2024-11-20 07:27:37.509017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166fcdd0 00:29:15.408 [2024-11-20 07:27:37.509492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.408 [2024-11-20 07:27:37.509508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:15.408 [2024-11-20 07:27:37.518764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e23b8 00:29:15.408 [2024-11-20 07:27:37.520031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.408 [2024-11-20 07:27:37.520047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:15.408 [2024-11-20 07:27:37.527016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166f4298 00:29:15.408 [2024-11-20 07:27:37.528108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.408 [2024-11-20 07:27:37.528124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:15.408 [2024-11-20 07:27:37.535058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.408 [2024-11-20 07:27:37.535341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.408 [2024-11-20 07:27:37.535358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.408 [2024-11-20 07:27:37.543802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.408 [2024-11-20 07:27:37.544047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.408 [2024-11-20 07:27:37.544063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.408 [2024-11-20 07:27:37.552569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.408 [2024-11-20 07:27:37.552808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.408 [2024-11-20 07:27:37.552823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.408 [2024-11-20 07:27:37.561337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.408 [2024-11-20 07:27:37.561578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.408 [2024-11-20 07:27:37.561595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.408 [2024-11-20 07:27:37.570049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.408 [2024-11-20 07:27:37.570332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.408 [2024-11-20 07:27:37.570349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.408 [2024-11-20 07:27:37.578788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.408 [2024-11-20 07:27:37.579038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.408 [2024-11-20 07:27:37.579054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.408 [2024-11-20 07:27:37.587485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.408 [2024-11-20 07:27:37.587755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.408 [2024-11-20 07:27:37.587771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.408 [2024-11-20 07:27:37.596227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.408 [2024-11-20 07:27:37.596499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.408 [2024-11-20 07:27:37.596516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.408 [2024-11-20 07:27:37.605009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.408 [2024-11-20 07:27:37.605250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.408 [2024-11-20 07:27:37.605266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.408 [2024-11-20 07:27:37.613744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.408 [2024-11-20 07:27:37.614020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.408 [2024-11-20 07:27:37.614037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.408 [2024-11-20 07:27:37.622500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.408 [2024-11-20 07:27:37.622643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.408 [2024-11-20 07:27:37.622659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.408 [2024-11-20 07:27:37.631216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.408 [2024-11-20 07:27:37.631465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.408 [2024-11-20 07:27:37.631481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.408 [2024-11-20 07:27:37.639950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.408 [2024-11-20 07:27:37.640218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.408 [2024-11-20 07:27:37.640233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.408 [2024-11-20 07:27:37.648849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.408 [2024-11-20 07:27:37.649091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.408 [2024-11-20 07:27:37.649106] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.408 [2024-11-20 07:27:37.657567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.408 [2024-11-20 07:27:37.657844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.408 [2024-11-20 07:27:37.657859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.408 [2024-11-20 07:27:37.666292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.408 [2024-11-20 07:27:37.666531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.408 [2024-11-20 07:27:37.666547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.408 [2024-11-20 07:27:37.675034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.408 [2024-11-20 07:27:37.675299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.408 [2024-11-20 07:27:37.675315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.669 [2024-11-20 07:27:37.683797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.669 [2024-11-20 07:27:37.684091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.669 [2024-11-20 07:27:37.684108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.669 [2024-11-20 07:27:37.692654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.669 [2024-11-20 07:27:37.692803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.669 [2024-11-20 07:27:37.692821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.669 [2024-11-20 07:27:37.701395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.669 [2024-11-20 07:27:37.701686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.669 [2024-11-20 07:27:37.701703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.669 [2024-11-20 07:27:37.710112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.669 [2024-11-20 07:27:37.710352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.669 
[2024-11-20 07:27:37.710368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.669 [2024-11-20 07:27:37.718860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.669 [2024-11-20 07:27:37.719148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.669 [2024-11-20 07:27:37.719169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.669 [2024-11-20 07:27:37.727583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.669 [2024-11-20 07:27:37.727863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.669 [2024-11-20 07:27:37.727880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.669 [2024-11-20 07:27:37.736284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.669 [2024-11-20 07:27:37.736578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.669 [2024-11-20 07:27:37.736595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.669 [2024-11-20 07:27:37.744989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.669 [2024-11-20 07:27:37.745260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.669 [2024-11-20 07:27:37.745278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.669 [2024-11-20 07:27:37.753710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.669 [2024-11-20 07:27:37.753980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.669 [2024-11-20 07:27:37.753996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.669 [2024-11-20 07:27:37.762516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.669 [2024-11-20 07:27:37.762789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.669 [2024-11-20 07:27:37.762806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.669 [2024-11-20 07:27:37.771228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.669 [2024-11-20 07:27:37.771496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7684 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:15.669 [2024-11-20 07:27:37.771514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.669 [2024-11-20 07:27:37.779941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.669 [2024-11-20 07:27:37.780212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.669 [2024-11-20 07:27:37.780229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.669 [2024-11-20 07:27:37.788663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.669 [2024-11-20 07:27:37.788936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.669 [2024-11-20 07:27:37.788952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.669 [2024-11-20 07:27:37.797459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.669 [2024-11-20 07:27:37.797731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.669 [2024-11-20 07:27:37.797748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.669 [2024-11-20 07:27:37.806203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.669 [2024-11-20 07:27:37.806458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.669 [2024-11-20 07:27:37.806474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.669 [2024-11-20 07:27:37.814945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.669 [2024-11-20 07:27:37.815226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.669 [2024-11-20 07:27:37.815242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.669 [2024-11-20 07:27:37.823687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.669 [2024-11-20 07:27:37.823951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.669 [2024-11-20 07:27:37.823968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.669 [2024-11-20 07:27:37.832407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.669 [2024-11-20 07:27:37.832675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 
nsid:1 lba:18694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.669 [2024-11-20 07:27:37.832693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.669 [2024-11-20 07:27:37.841123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.669 [2024-11-20 07:27:37.841374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.670 [2024-11-20 07:27:37.841391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.670 [2024-11-20 07:27:37.849832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.670 [2024-11-20 07:27:37.850085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.670 [2024-11-20 07:27:37.850102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.670 [2024-11-20 07:27:37.858588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.670 [2024-11-20 07:27:37.858859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.670 [2024-11-20 07:27:37.858875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.670 [2024-11-20 07:27:37.867304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.670 [2024-11-20 07:27:37.867577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.670 [2024-11-20 07:27:37.867593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.670 [2024-11-20 07:27:37.876035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.670 [2024-11-20 07:27:37.876306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.670 [2024-11-20 07:27:37.876323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.670 [2024-11-20 07:27:37.884753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.670 [2024-11-20 07:27:37.885013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.670 [2024-11-20 07:27:37.885030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.670 [2024-11-20 07:27:37.893544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.670 [2024-11-20 07:27:37.893812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.670 [2024-11-20 07:27:37.893829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.670 [2024-11-20 07:27:37.902309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.670 [2024-11-20 07:27:37.902585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.670 [2024-11-20 07:27:37.902601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.670 [2024-11-20 07:27:37.911067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.670 [2024-11-20 07:27:37.911331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.670 [2024-11-20 07:27:37.911347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.670 [2024-11-20 07:27:37.919829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.670 [2024-11-20 07:27:37.920071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.670 [2024-11-20 07:27:37.920089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.670 [2024-11-20 07:27:37.928587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.670 [2024-11-20 07:27:37.928850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.670 [2024-11-20 07:27:37.928866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.670 [2024-11-20 07:27:37.937317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.670 [2024-11-20 07:27:37.937578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.670 [2024-11-20 07:27:37.937595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.931 [2024-11-20 07:27:37.946111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.931 [2024-11-20 07:27:37.946396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.931 [2024-11-20 07:27:37.946413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.931 [2024-11-20 07:27:37.954866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.931 
[2024-11-20 07:27:37.955148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.931 [2024-11-20 07:27:37.955171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.931 [2024-11-20 07:27:37.963636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.931 [2024-11-20 07:27:37.963863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.931 [2024-11-20 07:27:37.963878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.931 [2024-11-20 07:27:37.972319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.931 [2024-11-20 07:27:37.972566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.931 [2024-11-20 07:27:37.972582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.931 [2024-11-20 07:27:37.981062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.931 [2024-11-20 07:27:37.981336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.931 [2024-11-20 07:27:37.981351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.931 [2024-11-20 07:27:37.989772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.931 [2024-11-20 07:27:37.990042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.931 [2024-11-20 07:27:37.990059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.931 [2024-11-20 07:27:37.998459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.931 [2024-11-20 07:27:37.998731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.931 [2024-11-20 07:27:37.998748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.931 [2024-11-20 07:27:38.007244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.931 [2024-11-20 07:27:38.007479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.931 [2024-11-20 07:27:38.007495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.931 [2024-11-20 07:27:38.015981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) 
with pdu=0x2000166e3060 00:29:15.932 [2024-11-20 07:27:38.016269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.932 [2024-11-20 07:27:38.016286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.932 [2024-11-20 07:27:38.024708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.932 [2024-11-20 07:27:38.024954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.932 [2024-11-20 07:27:38.024971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.932 [2024-11-20 07:27:38.033465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.932 [2024-11-20 07:27:38.033727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.932 [2024-11-20 07:27:38.033745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.932 [2024-11-20 07:27:38.042194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.932 [2024-11-20 07:27:38.042473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.932 [2024-11-20 07:27:38.042489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.932 [2024-11-20 07:27:38.050952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.932 [2024-11-20 07:27:38.051188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.932 [2024-11-20 07:27:38.051203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.932 [2024-11-20 07:27:38.059695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.932 [2024-11-20 07:27:38.059962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.932 [2024-11-20 07:27:38.059978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.932 [2024-11-20 07:27:38.068394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.932 [2024-11-20 07:27:38.068655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.932 [2024-11-20 07:27:38.068673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.932 [2024-11-20 07:27:38.077119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.932 [2024-11-20 07:27:38.077389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.932 [2024-11-20 07:27:38.077405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.932 [2024-11-20 07:27:38.085848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.932 [2024-11-20 07:27:38.086108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.932 [2024-11-20 07:27:38.086125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.932 [2024-11-20 07:27:38.094588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.932 [2024-11-20 07:27:38.094841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.932 [2024-11-20 07:27:38.094857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.932 [2024-11-20 07:27:38.103348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.932 [2024-11-20 07:27:38.103612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.932 [2024-11-20 07:27:38.103628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.932 [2024-11-20 07:27:38.112175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.932 [2024-11-20 07:27:38.112418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.932 [2024-11-20 07:27:38.112436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.932 [2024-11-20 07:27:38.120908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.932 [2024-11-20 07:27:38.121147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.932 [2024-11-20 07:27:38.121167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.932 [2024-11-20 07:27:38.129629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.932 [2024-11-20 07:27:38.129907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.932 [2024-11-20 07:27:38.129923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.932 [2024-11-20 07:27:38.138422] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.932 [2024-11-20 07:27:38.138689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.932 [2024-11-20 07:27:38.138706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.932 [2024-11-20 07:27:38.147187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.932 [2024-11-20 07:27:38.147449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.932 [2024-11-20 07:27:38.147466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.932 [2024-11-20 07:27:38.155950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.932 [2024-11-20 07:27:38.156215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.932 [2024-11-20 07:27:38.156232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.932 [2024-11-20 07:27:38.164705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.932 [2024-11-20 07:27:38.164955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.932 [2024-11-20 07:27:38.164972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.932 [2024-11-20 07:27:38.173457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.932 [2024-11-20 07:27:38.173746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.932 [2024-11-20 07:27:38.173763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.932 [2024-11-20 07:27:38.182165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.932 [2024-11-20 07:27:38.182444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.932 [2024-11-20 07:27:38.182459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.932 [2024-11-20 07:27:38.190945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060 00:29:15.932 [2024-11-20 07:27:38.191230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.932 [2024-11-20 07:27:38.191248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
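Every injected corruption repeats the same three-record pattern seen throughout this stretch: tcp.c flags the data digest mismatch on the qpair, nvme_qpair.c prints the affected WRITE, and the completion carries COMMAND TRANSIENT TRANSPORT ERROR (sct 0x0 / sc 0x22), which bdev_nvme silently retries while the --nvme-error-stat counter ticks up. If this console output were saved to a file (the file name below is a placeholder), the injected completions could be tallied directly:

    # Count digest-error completions in a saved copy of this console log.
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf-console.log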
00:29:15.932 [2024-11-20 07:27:38.199698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060
00:29:15.932 [2024-11-20 07:27:38.199963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.932 [2024-11-20 07:27:38.199980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-entry pattern repeats for every subsequent write from 07:27:38.208 through 07:27:38.401: a tcp.c:2233 data digest error on tqpair=(0x1051520), the WRITE command print (cid alternating between 124 and 125, lba varying), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:29:16.200 29125.00 IOPS, 113.77 MiB/s [2024-11-20T06:27:38.478Z]
[... the data-digest-error triplet pattern continues uninterrupted from 07:27:38.409 through 07:27:39.397 ...]
00:29:17.356 [2024-11-20 07:27:39.406388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051520) with pdu=0x2000166e3060
[2024-11-20 07:27:39.407644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:17.356 [2024-11-20 07:27:39.407660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:17.356 29177.50 IOPS, 113.97 MiB/s
00:29:17.356 Latency(us)
[2024-11-20T06:27:39.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:17.356 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:17.356 nvme0n1 : 2.00 29180.50 113.99 0.00 0.00 4379.60 2198.19 12069.55
00:29:17.356 [2024-11-20T06:27:39.634Z] ===================================================================================================================
00:29:17.356 [2024-11-20T06:27:39.634Z] Total : 29180.50 113.99 0.00 0.00 4379.60 2198.19 12069.55
00:29:17.356 {
00:29:17.356   "results": [
00:29:17.356     {
00:29:17.356       "job": "nvme0n1",
00:29:17.356       "core_mask": "0x2",
00:29:17.356       "workload": "randwrite",
00:29:17.356       "status": "finished",
00:29:17.356       "queue_depth": 128,
00:29:17.356       "io_size": 4096,
00:29:17.356       "runtime": 2.004181,
00:29:17.356       "iops": 29180.498168578586,
00:29:17.356       "mibps": 113.9863209710101,
00:29:17.356       "io_failed": 0,
00:29:17.356       "io_timeout": 0,
00:29:17.356       "avg_latency_us": 4379.597730622574,
00:29:17.356       "min_latency_us": 2198.1866666666665,
00:29:17.356       "max_latency_us": 12069.546666666667
00:29:17.356     }
00:29:17.356   ],
00:29:17.356   "core_count": 1
00:29:17.356 }
00:29:17.356 07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:17.356 07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:17.356 07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:17.356 | .driver_specific
00:29:17.356 | .nvme_error
00:29:17.356 | .status_code
00:29:17.356 | .command_transient_transport_error'
00:29:17.356 07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:17.356 07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 229 > 0 ))
00:29:17.356 07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3695847
00:29:17.356 07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3695847 ']'
00:29:17.356 07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3695847
00:29:17.618 07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:29:17.618 07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:17.618 07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3695847
00:29:17.618 07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:29:17.618 07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
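[editor's note] The xtrace above is the error-accounting step of this test: get_transient_errcount reads bdev_get_iostat over the bperf RPC socket and extracts the per-status-code counter with jq, and the (( 229 > 0 )) check passes because 229 injected digest errors were recorded during the run. A minimal standalone sketch of the same query, using the rpc.py path and socket shown in the trace (the function name count_transient_errors is illustrative, not the harness helper):

    # Count NVMe commands on a bdev that completed with TRANSIENT TRANSPORT ERROR.
    # Requires bdev_nvme_set_options --nvme-error-stat so the driver keeps
    # per-status-code counters in the iostat driver_specific section.
    count_transient_errors() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }

    errcount=$(count_transient_errors nvme0n1)   # the run above returned 229
    (( errcount > 0 ))                           # the test asserts at least one digest error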
00:29:17.618 07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3695847'
00:29:17.618 killing process with pid 3695847
00:29:17.618 07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3695847
00:29:17.618 Received shutdown signal, test time was about 2.000000 seconds
00:29:17.618
00:29:17.618 Latency(us)
00:29:17.618 [2024-11-20T06:27:39.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:17.618 [2024-11-20T06:27:39.896Z] ===================================================================================================================
00:29:17.618 [2024-11-20T06:27:39.896Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:17.618 07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3695847
00:29:17.618 07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3696679
07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3696679 /var/tmp/bperf.sock
07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3696679 ']'
07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
07:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:17.618 [2024-11-20 07:27:39.848123] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:29:17.618 [2024-11-20 07:27:39.848183] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3696679 ]
00:29:17.618 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:17.618 Zero copy mechanism will not be used.
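[editor's note] Here digest.sh tears down the previous bdevperf (pid 3695847) and run_bperf_err relaunches it for the next case: randwrite with 128 KiB I/O at queue depth 16, started with -z so bdevperf idles until a perform_tests RPC arrives, while waitforlisten blocks until the new RPC socket answers. A simplified sketch of that launch-and-wait step under the same paths; the polling loop below is only a stand-in for the harness's waitforlisten helper, not its actual implementation:

    # Launch bdevperf on core 1 (-m 2) with a private RPC socket; -z makes it
    # wait for an explicit perform_tests RPC instead of starting I/O immediately.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Stand-in for waitforlisten: poll until the UNIX-domain socket exists
    # and the process is still alive (up to ~10 s, mirroring max_retries=100).
    for _ in $(seq 1 100); do
        kill -0 "$bperfpid" 2>/dev/null || exit 1
        [[ -S /var/tmp/bperf.sock ]] && break
        sleep 0.1
    done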
00:29:17.879 [2024-11-20 07:27:39.932917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:17.879 [2024-11-20 07:27:39.962506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:18.451 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:18.451 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:29:18.451 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:18.451 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:18.711 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:18.711 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:18.711 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:18.711 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:18.711 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:18.711 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:18.971 nvme0n1
00:29:18.971 07:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:18.971 07:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:18.971 07:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:18.971 07:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:18.971 07:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:18.971 07:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:18.971 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:18.971 Zero copy mechanism will not be used.
00:29:18.971 Running I/O for 2 seconds...
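[editor's note] The trace above is the heart of the digest-error case: per-status error counters and unlimited retries are enabled first, crc32c injection is switched off so the controller can attach cleanly with data digest (--ddgst) enabled, and only after the nvme0n1 bdev exists is the accel layer told to corrupt crc32c results again before the timed run starts. Condensed into one sequence with the same RPC calls (rpc.py path moved into a variable for readability; this assumes -i 32 is the injection interval, i.e. roughly one in 32 digest calculations is corrupted):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Keep per-status NVMe error counters and retry transient failures forever.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach with data digest enabled while injection is off, so setup I/O succeeds.
    $RPC accel_error_inject_error -o crc32c -t disable
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Re-enable injection: corrupt every 32nd crc32c, producing the periodic
    # "Data digest error" / TRANSIENT TRANSPORT ERROR records seen below.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32

    # Kick off the timed randwrite run on the waiting bdevperf process.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests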
00:29:18.971 [2024-11-20 07:27:41.236813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:18.971 [2024-11-20 07:27:41.237089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.971 [2024-11-20 07:27:41.237115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.971 [2024-11-20 07:27:41.245338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:18.971 [2024-11-20 07:27:41.245419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.971 [2024-11-20 07:27:41.245437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.234 [2024-11-20 07:27:41.254075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.234 [2024-11-20 07:27:41.254372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.235 [2024-11-20 07:27:41.254392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.235 [2024-11-20 07:27:41.260615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.235 [2024-11-20 07:27:41.260676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.235 [2024-11-20 07:27:41.260692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.235 [2024-11-20 07:27:41.270541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.235 [2024-11-20 07:27:41.270832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.235 [2024-11-20 07:27:41.270851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.235 [2024-11-20 07:27:41.279028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.235 [2024-11-20 07:27:41.279093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.235 [2024-11-20 07:27:41.279108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.235 [2024-11-20 07:27:41.287059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.235 [2024-11-20 07:27:41.287121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.235 [2024-11-20 07:27:41.287137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.235 [2024-11-20 07:27:41.297086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.235 [2024-11-20 07:27:41.297376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.235 [2024-11-20 07:27:41.297393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.235 [2024-11-20 07:27:41.303893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.235 [2024-11-20 07:27:41.303955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.235 [2024-11-20 07:27:41.303971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.235 [2024-11-20 07:27:41.311645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.235 [2024-11-20 07:27:41.311929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.235 [2024-11-20 07:27:41.311946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.235 [2024-11-20 07:27:41.320081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.235 [2024-11-20 07:27:41.320144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.235 [2024-11-20 07:27:41.320166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.235 [2024-11-20 07:27:41.328153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.235 [2024-11-20 07:27:41.328254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.235 [2024-11-20 07:27:41.328270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.235 [2024-11-20 07:27:41.338285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.235 [2024-11-20 07:27:41.338346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.235 [2024-11-20 07:27:41.338362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.235 [2024-11-20 07:27:41.345217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.235 [2024-11-20 07:27:41.345282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.235 [2024-11-20 07:27:41.345297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.235 [2024-11-20 07:27:41.352888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.235 [2024-11-20 07:27:41.353157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.235 [2024-11-20 07:27:41.353179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.235 [2024-11-20 07:27:41.358914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.235 [2024-11-20 07:27:41.359229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.235 [2024-11-20 07:27:41.359246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.235 [2024-11-20 07:27:41.366185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.235 [2024-11-20 07:27:41.366414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.235 [2024-11-20 07:27:41.366430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.235 [2024-11-20 07:27:41.375571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.235 [2024-11-20 07:27:41.375877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.235 [2024-11-20 07:27:41.375894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.235 [2024-11-20 07:27:41.382317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.235 [2024-11-20 07:27:41.382601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.235 [2024-11-20 07:27:41.382621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.235 [2024-11-20 07:27:41.391078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.235 [2024-11-20 07:27:41.391377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.235 [2024-11-20 07:27:41.391395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.235 [2024-11-20 07:27:41.396602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.235 [2024-11-20 07:27:41.396656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.235 [2024-11-20 07:27:41.396671] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.235 [2024-11-20 07:27:41.404049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.235 [2024-11-20 07:27:41.404293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.235 [2024-11-20 07:27:41.404309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.235 [2024-11-20 07:27:41.413195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.235 [2024-11-20 07:27:41.413248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.235 [2024-11-20 07:27:41.413263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.235 [2024-11-20 07:27:41.423498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.235 [2024-11-20 07:27:41.423578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.236 [2024-11-20 07:27:41.423594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.236 [2024-11-20 07:27:41.430384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.236 [2024-11-20 07:27:41.430495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.236 [2024-11-20 07:27:41.430511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.236 [2024-11-20 07:27:41.438575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.236 [2024-11-20 07:27:41.438802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.236 [2024-11-20 07:27:41.438818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.236 [2024-11-20 07:27:41.448643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.236 [2024-11-20 07:27:41.448900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.236 [2024-11-20 07:27:41.448918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.236 [2024-11-20 07:27:41.460012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.236 [2024-11-20 07:27:41.460202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.236 [2024-11-20 07:27:41.460219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.236 [2024-11-20 07:27:41.471051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.236 [2024-11-20 07:27:41.471329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.236 [2024-11-20 07:27:41.471345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.236 [2024-11-20 07:27:41.482655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.236 [2024-11-20 07:27:41.482740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.236 [2024-11-20 07:27:41.482755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.236 [2024-11-20 07:27:41.492367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.236 [2024-11-20 07:27:41.492429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.236 [2024-11-20 07:27:41.492445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.236 [2024-11-20 07:27:41.503975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.236 [2024-11-20 07:27:41.504283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.236 [2024-11-20 07:27:41.504300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.498 [2024-11-20 07:27:41.514993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.498 [2024-11-20 07:27:41.515304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.498 [2024-11-20 07:27:41.515321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.498 [2024-11-20 07:27:41.524186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.498 [2024-11-20 07:27:41.524464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.498 [2024-11-20 07:27:41.524480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.498 [2024-11-20 07:27:41.534862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.498 [2024-11-20 07:27:41.534920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.498 [2024-11-20 
07:27:41.534935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.498 [2024-11-20 07:27:41.543146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.498 [2024-11-20 07:27:41.543452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.498 [2024-11-20 07:27:41.543469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.498 [2024-11-20 07:27:41.554361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.498 [2024-11-20 07:27:41.554623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.498 [2024-11-20 07:27:41.554640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.498 [2024-11-20 07:27:41.563820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.498 [2024-11-20 07:27:41.564111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.498 [2024-11-20 07:27:41.564128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.498 [2024-11-20 07:27:41.571806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.498 [2024-11-20 07:27:41.571852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.498 [2024-11-20 07:27:41.571868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.498 [2024-11-20 07:27:41.582127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.498 [2024-11-20 07:27:41.582435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.498 [2024-11-20 07:27:41.582452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.498 [2024-11-20 07:27:41.591887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.498 [2024-11-20 07:27:41.592104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.498 [2024-11-20 07:27:41.592121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.498 [2024-11-20 07:27:41.603194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.498 [2024-11-20 07:27:41.603463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:19.498 [2024-11-20 07:27:41.603479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.498 [2024-11-20 07:27:41.612012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.499 [2024-11-20 07:27:41.612063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.499 [2024-11-20 07:27:41.612079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.499 [2024-11-20 07:27:41.620657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.499 [2024-11-20 07:27:41.620719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.499 [2024-11-20 07:27:41.620735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.499 [2024-11-20 07:27:41.629860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.499 [2024-11-20 07:27:41.630171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.499 [2024-11-20 07:27:41.630191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.499 [2024-11-20 07:27:41.638359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.499 [2024-11-20 07:27:41.638758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.499 [2024-11-20 07:27:41.638775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.499 [2024-11-20 07:27:41.647515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.499 [2024-11-20 07:27:41.647806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.499 [2024-11-20 07:27:41.647823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.499 [2024-11-20 07:27:41.658868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.499 [2024-11-20 07:27:41.659082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.499 [2024-11-20 07:27:41.659098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.499 [2024-11-20 07:27:41.669234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.499 [2024-11-20 07:27:41.669524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.499 [2024-11-20 07:27:41.669542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.499 [2024-11-20 07:27:41.680321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.499 [2024-11-20 07:27:41.680605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.499 [2024-11-20 07:27:41.680623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.499 [2024-11-20 07:27:41.691383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.499 [2024-11-20 07:27:41.691634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.499 [2024-11-20 07:27:41.691652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.499 [2024-11-20 07:27:41.702222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.499 [2024-11-20 07:27:41.702370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.499 [2024-11-20 07:27:41.702388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.499 [2024-11-20 07:27:41.713203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.499 [2024-11-20 07:27:41.713463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.499 [2024-11-20 07:27:41.713482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.499 [2024-11-20 07:27:41.724094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.499 [2024-11-20 07:27:41.724402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.499 [2024-11-20 07:27:41.724420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.499 [2024-11-20 07:27:41.733991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.499 [2024-11-20 07:27:41.734190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.499 [2024-11-20 07:27:41.734207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.499 [2024-11-20 07:27:41.744669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.499 [2024-11-20 07:27:41.744980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.499 [2024-11-20 07:27:41.744998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.499 [2024-11-20 07:27:41.755178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.499 [2024-11-20 07:27:41.755454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.499 [2024-11-20 07:27:41.755472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.499 [2024-11-20 07:27:41.765281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.499 [2024-11-20 07:27:41.765587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.499 [2024-11-20 07:27:41.765605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.761 [2024-11-20 07:27:41.776183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.761 [2024-11-20 07:27:41.776398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.761 [2024-11-20 07:27:41.776415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.761 [2024-11-20 07:27:41.786647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.761 [2024-11-20 07:27:41.786869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.761 [2024-11-20 07:27:41.786886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.761 [2024-11-20 07:27:41.796619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.761 [2024-11-20 07:27:41.796937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.761 [2024-11-20 07:27:41.796955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.761 [2024-11-20 07:27:41.806822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.761 [2024-11-20 07:27:41.807088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.761 [2024-11-20 07:27:41.807105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.761 [2024-11-20 07:27:41.817829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.761 [2024-11-20 07:27:41.818132] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.761 [2024-11-20 07:27:41.818150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.761 [2024-11-20 07:27:41.827916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.761 [2024-11-20 07:27:41.828136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.761 [2024-11-20 07:27:41.828152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.761 [2024-11-20 07:27:41.837907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.761 [2024-11-20 07:27:41.838194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.761 [2024-11-20 07:27:41.838212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.761 [2024-11-20 07:27:41.848358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.761 [2024-11-20 07:27:41.848635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.761 [2024-11-20 07:27:41.848652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.761 [2024-11-20 07:27:41.858803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.761 [2024-11-20 07:27:41.859059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.761 [2024-11-20 07:27:41.859077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.761 [2024-11-20 07:27:41.869059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.761 [2024-11-20 07:27:41.869319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.761 [2024-11-20 07:27:41.869336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.761 [2024-11-20 07:27:41.879784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.761 [2024-11-20 07:27:41.880027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.761 [2024-11-20 07:27:41.880044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.761 [2024-11-20 07:27:41.890090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.761 [2024-11-20 07:27:41.890424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.761 [2024-11-20 07:27:41.890441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.761 [2024-11-20 07:27:41.901030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.761 [2024-11-20 07:27:41.901303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.761 [2024-11-20 07:27:41.901325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.761 [2024-11-20 07:27:41.911406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.761 [2024-11-20 07:27:41.911676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.761 [2024-11-20 07:27:41.911694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.761 [2024-11-20 07:27:41.921535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.761 [2024-11-20 07:27:41.921852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.761 [2024-11-20 07:27:41.921870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.761 [2024-11-20 07:27:41.931980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.761 [2024-11-20 07:27:41.932375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.761 [2024-11-20 07:27:41.932393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.761 [2024-11-20 07:27:41.941854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.761 [2024-11-20 07:27:41.942169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.761 [2024-11-20 07:27:41.942186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.761 [2024-11-20 07:27:41.952743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.761 [2024-11-20 07:27:41.953023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.761 [2024-11-20 07:27:41.953042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.761 [2024-11-20 07:27:41.963274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.761 [2024-11-20 
07:27:41.963587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.761 [2024-11-20 07:27:41.963605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.761 [2024-11-20 07:27:41.972694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.761 [2024-11-20 07:27:41.972942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.761 [2024-11-20 07:27:41.972958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.761 [2024-11-20 07:27:41.983572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.761 [2024-11-20 07:27:41.983888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.762 [2024-11-20 07:27:41.983906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.762 [2024-11-20 07:27:41.993560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.762 [2024-11-20 07:27:41.993829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.762 [2024-11-20 07:27:41.993847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.762 [2024-11-20 07:27:42.004202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.762 [2024-11-20 07:27:42.004363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.762 [2024-11-20 07:27:42.004379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.762 [2024-11-20 07:27:42.014897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.762 [2024-11-20 07:27:42.015248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.762 [2024-11-20 07:27:42.015266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.762 [2024-11-20 07:27:42.024992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:19.762 [2024-11-20 07:27:42.025263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.762 [2024-11-20 07:27:42.025280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.023 [2024-11-20 07:27:42.035542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with 
pdu=0x2000166ff3c8 00:29:20.023 [2024-11-20 07:27:42.035891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.023 [2024-11-20 07:27:42.035909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.023 [2024-11-20 07:27:42.045694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.023 [2024-11-20 07:27:42.046030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.023 [2024-11-20 07:27:42.046048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.023 [2024-11-20 07:27:42.055746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.023 [2024-11-20 07:27:42.056085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.023 [2024-11-20 07:27:42.056102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.023 [2024-11-20 07:27:42.066301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.023 [2024-11-20 07:27:42.066522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.023 [2024-11-20 07:27:42.066538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.023 [2024-11-20 07:27:42.077596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.023 [2024-11-20 07:27:42.077874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.023 [2024-11-20 07:27:42.077890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.023 [2024-11-20 07:27:42.086998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.023 [2024-11-20 07:27:42.087283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.023 [2024-11-20 07:27:42.087300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.023 [2024-11-20 07:27:42.097595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.023 [2024-11-20 07:27:42.097837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.023 [2024-11-20 07:27:42.097853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.023 [2024-11-20 07:27:42.108209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.024 [2024-11-20 07:27:42.108509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.024 [2024-11-20 07:27:42.108525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.024 [2024-11-20 07:27:42.118251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.024 [2024-11-20 07:27:42.118522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.024 [2024-11-20 07:27:42.118539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.024 [2024-11-20 07:27:42.129100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.024 [2024-11-20 07:27:42.129362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.024 [2024-11-20 07:27:42.129379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.024 [2024-11-20 07:27:42.139083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.024 [2024-11-20 07:27:42.139323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.024 [2024-11-20 07:27:42.139339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.024 [2024-11-20 07:27:42.149810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.024 [2024-11-20 07:27:42.150139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.024 [2024-11-20 07:27:42.150156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.024 [2024-11-20 07:27:42.159919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.024 [2024-11-20 07:27:42.160253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.024 [2024-11-20 07:27:42.160270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.024 [2024-11-20 07:27:42.170314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.024 [2024-11-20 07:27:42.170627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.024 [2024-11-20 07:27:42.170647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.024 [2024-11-20 07:27:42.180632] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.024 [2024-11-20 07:27:42.180937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.024 [2024-11-20 07:27:42.180954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.024 [2024-11-20 07:27:42.190735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.024 [2024-11-20 07:27:42.191068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.024 [2024-11-20 07:27:42.191085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.024 [2024-11-20 07:27:42.200841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.024 [2024-11-20 07:27:42.201115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.024 [2024-11-20 07:27:42.201132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.024 [2024-11-20 07:27:42.211433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.024 [2024-11-20 07:27:42.211719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.024 [2024-11-20 07:27:42.211737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.024 [2024-11-20 07:27:42.222195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.024 [2024-11-20 07:27:42.222439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.024 [2024-11-20 07:27:42.222456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.024 [2024-11-20 07:27:42.231945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.024 [2024-11-20 07:27:42.232123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.024 [2024-11-20 07:27:42.232140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.024 3177.00 IOPS, 397.12 MiB/s [2024-11-20T06:27:42.302Z] [2024-11-20 07:27:42.243096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.024 [2024-11-20 07:27:42.243375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.024 [2024-11-20 07:27:42.243390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:29:20.024 [2024-11-20 07:27:42.252978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.024 [2024-11-20 07:27:42.253199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.024 [2024-11-20 07:27:42.253215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.024 [2024-11-20 07:27:42.257922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.024 [2024-11-20 07:27:42.257989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.024 [2024-11-20 07:27:42.258004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.024 [2024-11-20 07:27:42.260931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.024 [2024-11-20 07:27:42.260975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.024 [2024-11-20 07:27:42.260990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.024 [2024-11-20 07:27:42.263888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.024 [2024-11-20 07:27:42.263934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.024 [2024-11-20 07:27:42.263950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.024 [2024-11-20 07:27:42.267750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.024 [2024-11-20 07:27:42.267973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.024 [2024-11-20 07:27:42.267988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.024 [2024-11-20 07:27:42.276004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.024 [2024-11-20 07:27:42.276068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.024 [2024-11-20 07:27:42.276083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.024 [2024-11-20 07:27:42.279702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.024 [2024-11-20 07:27:42.279781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.024 [2024-11-20 07:27:42.279796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.024 [2024-11-20 07:27:42.286515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.024 [2024-11-20 07:27:42.286789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.024 [2024-11-20 07:27:42.286804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.024 [2024-11-20 07:27:42.295906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.024 [2024-11-20 07:27:42.296139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.025 [2024-11-20 07:27:42.296154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.286 [2024-11-20 07:27:42.306523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.286 [2024-11-20 07:27:42.306751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.286 [2024-11-20 07:27:42.306766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.286 [2024-11-20 07:27:42.316426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.286 [2024-11-20 07:27:42.316725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.286 [2024-11-20 07:27:42.316741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.286 [2024-11-20 07:27:42.326332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.286 [2024-11-20 07:27:42.326603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.286 [2024-11-20 07:27:42.326619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.286 [2024-11-20 07:27:42.336899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.286 [2024-11-20 07:27:42.337148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.286 [2024-11-20 07:27:42.337170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.286 [2024-11-20 07:27:42.347284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.286 [2024-11-20 07:27:42.347473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.286 [2024-11-20 07:27:42.347489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.286 [2024-11-20 07:27:42.356748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.286 [2024-11-20 07:27:42.356976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.286 [2024-11-20 07:27:42.356992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.286 [2024-11-20 07:27:42.367003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.286 [2024-11-20 07:27:42.367275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.286 [2024-11-20 07:27:42.367290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.286 [2024-11-20 07:27:42.377248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.286 [2024-11-20 07:27:42.377539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.286 [2024-11-20 07:27:42.377563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.286 [2024-11-20 07:27:42.387673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.286 [2024-11-20 07:27:42.387869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.286 [2024-11-20 07:27:42.387884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.286 [2024-11-20 07:27:42.398329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.286 [2024-11-20 07:27:42.398677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.286 [2024-11-20 07:27:42.398695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.286 [2024-11-20 07:27:42.408878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.286 [2024-11-20 07:27:42.409101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.286 [2024-11-20 07:27:42.409116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.286 [2024-11-20 07:27:42.419800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.286 [2024-11-20 07:27:42.420035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.286 [2024-11-20 07:27:42.420052] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.286 [2024-11-20 07:27:42.430242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.286 [2024-11-20 07:27:42.430484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.286 [2024-11-20 07:27:42.430499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.286 [2024-11-20 07:27:42.437087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.286 [2024-11-20 07:27:42.437137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.286 [2024-11-20 07:27:42.437152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.286 [2024-11-20 07:27:42.443081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.286 [2024-11-20 07:27:42.443138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.287 [2024-11-20 07:27:42.443153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.287 [2024-11-20 07:27:42.450455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.287 [2024-11-20 07:27:42.450513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.287 [2024-11-20 07:27:42.450528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.287 [2024-11-20 07:27:42.460170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.287 [2024-11-20 07:27:42.460230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.287 [2024-11-20 07:27:42.460246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.287 [2024-11-20 07:27:42.468204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.287 [2024-11-20 07:27:42.468486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.287 [2024-11-20 07:27:42.468502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.287 [2024-11-20 07:27:42.477819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.287 [2024-11-20 07:27:42.477881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.287 [2024-11-20 07:27:42.477896] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.287 [2024-11-20 07:27:42.486357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.287 [2024-11-20 07:27:42.486620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.287 [2024-11-20 07:27:42.486636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.287 [2024-11-20 07:27:42.495001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.287 [2024-11-20 07:27:42.495061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.287 [2024-11-20 07:27:42.495076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.287 [2024-11-20 07:27:42.504025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.287 [2024-11-20 07:27:42.504084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.287 [2024-11-20 07:27:42.504099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.287 [2024-11-20 07:27:42.510840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.287 [2024-11-20 07:27:42.511146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.287 [2024-11-20 07:27:42.511166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.287 [2024-11-20 07:27:42.519660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.287 [2024-11-20 07:27:42.519729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.287 [2024-11-20 07:27:42.519744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.287 [2024-11-20 07:27:42.528973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.287 [2024-11-20 07:27:42.529174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.287 [2024-11-20 07:27:42.529189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.287 [2024-11-20 07:27:42.534950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.287 [2024-11-20 07:27:42.535000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.287 [2024-11-20 
07:27:42.535015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.287 [2024-11-20 07:27:42.541978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.287 [2024-11-20 07:27:42.542023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.287 [2024-11-20 07:27:42.542038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.287 [2024-11-20 07:27:42.550517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.287 [2024-11-20 07:27:42.550563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.287 [2024-11-20 07:27:42.550578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.287 [2024-11-20 07:27:42.559074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.287 [2024-11-20 07:27:42.559129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.287 [2024-11-20 07:27:42.559145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.565828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.565874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.565890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.574023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.574398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.574414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.584184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.584321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.584336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.594544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.594611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:20.549 [2024-11-20 07:27:42.594626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.601913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.601968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.601983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.606274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.606324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.606339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.610745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.610840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.610858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.618554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.618687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.618702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.629241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.629533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.629548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.637645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.637855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.637870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.646826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.646889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.646905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.653972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.654035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.654050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.663348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.663400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.663415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.668143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.668214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.668229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.673741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.673795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.673810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.682739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.682935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.682950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.693056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.693118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.693134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.700562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.700625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.700640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.709841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.709926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.709941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.718603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.718803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.718818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.728287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.728621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.728637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.736903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.737168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.737183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.745916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.745978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.745993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.755155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.755502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.755518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.764120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.764189] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.764204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.773194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.773481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.773497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.782906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.783187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.783203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.793086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.793489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.793504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.804229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.804506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.804522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.549 [2024-11-20 07:27:42.815810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.549 [2024-11-20 07:27:42.816069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.549 [2024-11-20 07:27:42.816084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:42.827150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:42.827415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:42.827431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:42.839122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:42.839362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:42.839378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:42.850696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:42.850968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:42.850990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:42.862054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:42.862279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:42.862294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:42.873525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:42.873783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:42.873798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:42.885317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:42.885533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:42.885548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:42.897039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:42.897271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:42.897286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:42.907846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:42.908146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:42.908167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:42.918065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 
07:27:42.918385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:42.918401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:42.929499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:42.929812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:42.929827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:42.937727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:42.937974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:42.937989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:42.945421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:42.945703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:42.945719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:42.956539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:42.956740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:42.956756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:42.967606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:42.967776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:42.967791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:42.978967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:42.979272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:42.979288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:42.989713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with 
pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:42.990009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:42.990025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:43.001082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:43.001352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:43.001367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:43.012117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:43.012372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:43.012387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:43.021566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:43.021621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:43.021636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:43.025959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:43.026002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:43.026017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:43.033274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:43.033320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:43.033335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:43.036721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:43.036767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:43.036783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:43.040665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:43.040707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:43.040722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:43.044470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:43.044523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:43.044539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:43.048265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:43.048346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.811 [2024-11-20 07:27:43.048362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.811 [2024-11-20 07:27:43.051686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.811 [2024-11-20 07:27:43.051737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.812 [2024-11-20 07:27:43.051752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.812 [2024-11-20 07:27:43.055719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.812 [2024-11-20 07:27:43.055779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.812 [2024-11-20 07:27:43.055794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.812 [2024-11-20 07:27:43.059677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.812 [2024-11-20 07:27:43.059723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.812 [2024-11-20 07:27:43.059738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.812 [2024-11-20 07:27:43.063585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.812 [2024-11-20 07:27:43.063637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.812 [2024-11-20 07:27:43.063655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.812 [2024-11-20 07:27:43.067679] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.812 [2024-11-20 07:27:43.067743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.812 [2024-11-20 07:27:43.067759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.812 [2024-11-20 07:27:43.071545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.812 [2024-11-20 07:27:43.071610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.812 [2024-11-20 07:27:43.071626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.812 [2024-11-20 07:27:43.074879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.812 [2024-11-20 07:27:43.074930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.812 [2024-11-20 07:27:43.074945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.812 [2024-11-20 07:27:43.078296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.812 [2024-11-20 07:27:43.078360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.812 [2024-11-20 07:27:43.078376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.812 [2024-11-20 07:27:43.081836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:20.812 [2024-11-20 07:27:43.081921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.812 [2024-11-20 07:27:43.081936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:21.074 [2024-11-20 07:27:43.089536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.074 [2024-11-20 07:27:43.089621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.074 [2024-11-20 07:27:43.089636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:21.074 [2024-11-20 07:27:43.093561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.074 [2024-11-20 07:27:43.093634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.074 [2024-11-20 07:27:43.093650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:21.074 [2024-11-20 07:27:43.097451] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.074 [2024-11-20 07:27:43.097498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.074 [2024-11-20 07:27:43.097513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:21.074 [2024-11-20 07:27:43.101155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.074 [2024-11-20 07:27:43.101221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.074 [2024-11-20 07:27:43.101236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:21.074 [2024-11-20 07:27:43.106889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.106966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.106981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.112971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.113027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.113042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.121463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.121606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.121621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.125890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.125935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.125951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.131060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.131344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.131368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:21.075 
[2024-11-20 07:27:43.139755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.139825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.139840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.146738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.146790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.146805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.150717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.150771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.150786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.154449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.154492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.154507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.158272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.158333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.158348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.162316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.162378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.162393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.166105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.166150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.166171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.169567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.169610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.169625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.174473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.174529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.174545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.178640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.178771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.178786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.184220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.184547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.184563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.191340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.191389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.191406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.194592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.194636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.194651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.198566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.198613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.198629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.201681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.201730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.201745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.204604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.204654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.204669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.208794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.208858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.208873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.215252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.215332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.215347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.219228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.219274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.219289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.222377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.075 [2024-11-20 07:27:43.222456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.075 [2024-11-20 07:27:43.222472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:21.075 [2024-11-20 07:27:43.225594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8 00:29:21.076 [2024-11-20 07:27:43.225657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.076 [2024-11-20 07:27:43.225673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:21.076 [2024-11-20 07:27:43.229358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8
00:29:21.076 [2024-11-20 07:27:43.229405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:21.076 [2024-11-20 07:27:43.229420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:21.076 [2024-11-20 07:27:43.232826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8
00:29:21.076 [2024-11-20 07:27:43.232936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:21.076 [2024-11-20 07:27:43.232951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:21.076 [2024-11-20 07:27:43.236536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8
00:29:21.076 [2024-11-20 07:27:43.236640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:21.076 [2024-11-20 07:27:43.236656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:21.076 3700.00 IOPS, 462.50 MiB/s [2024-11-20T06:27:43.354Z] [2024-11-20 07:27:43.242230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1051860) with pdu=0x2000166ff3c8
00:29:21.076 [2024-11-20 07:27:43.242288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:21.076 [2024-11-20 07:27:43.242304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:21.076
00:29:21.076 Latency(us)
00:29:21.076 [2024-11-20T06:27:43.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:21.076 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:21.076 nvme0n1 : 2.01 3700.18 462.52 0.00 0.00 4318.15 1372.16 11905.71
00:29:21.076 [2024-11-20T06:27:43.354Z] ===================================================================================================================
00:29:21.076 [2024-11-20T06:27:43.354Z] Total : 3700.18 462.52 0.00 0.00 4318.15 1372.16 11905.71
00:29:21.076 {
00:29:21.076   "results": [
00:29:21.076     {
00:29:21.076       "job": "nvme0n1",
00:29:21.076       "core_mask": "0x2",
00:29:21.076       "workload": "randwrite",
00:29:21.076       "status": "finished",
00:29:21.076       "queue_depth": 16,
00:29:21.076       "io_size": 131072,
00:29:21.076       "runtime": 2.005036,
00:29:21.076       "iops": 3700.182939358695,
00:29:21.076       "mibps": 462.52286741983687,
00:29:21.076       "io_failed": 0,
00:29:21.076       "io_timeout": 0,
00:29:21.076       "avg_latency_us": 4318.148235611268,
00:29:21.076       "min_latency_us": 1372.16,
00:29:21.076       "max_latency_us": 11905.706666666667
00:29:21.076     }
00:29:21.076   ],
00:29:21.076   "core_count": 1
00:29:21.076 }
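The check that follows turns those digest failures into a pass/fail result: get_transient_errcount reads the bdev's NVMe error counters over the bperf RPC socket and pulls out the transient transport error count with jq, and the (( 240 > 0 )) assertion in the trace confirms the counter actually moved during the 2-second run. Reconstructed from that trace, the helper amounts to the following sketch (socket path and jq filter are verbatim from the trace; the function body itself is an approximation):

get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat exposes per-bdev NVMe error counters under
    # driver_specific.nvme_error; each data digest failure above completed as
    # TRANSIENT TRANSPORT ERROR (00/22) with dnr:0 (retryable), so it lands in
    # status_code.command_transient_transport_error.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}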
00:29:21.076 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:21.076 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:21.076 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:21.076 | .driver_specific
00:29:21.076 | .nvme_error
00:29:21.076 | .status_code
00:29:21.076 | .command_transient_transport_error'
00:29:21.076 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:21.338 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 240 > 0 ))
00:29:21.338 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3696679
00:29:21.338 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3696679 ']'
00:29:21.338 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3696679
00:29:21.338 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:29:21.338 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:21.338 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3696679
00:29:21.338 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:29:21.338 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:29:21.338 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3696679'
killing process with pid 3696679
00:29:21.338 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3696679
Received shutdown signal, test time was about 2.000000 seconds
00:29:21.338
00:29:21.338 Latency(us)
00:29:21.338 [2024-11-20T06:27:43.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:21.338 [2024-11-20T06:27:43.616Z] ===================================================================================================================
00:29:21.338 [2024-11-20T06:27:43.616Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:21.338 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3696679
00:29:21.600 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3694129
00:29:21.600 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3694129 ']'
00:29:21.600 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3694129
00:29:21.600 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:29:21.600 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:21.600 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3694129
00:29:21.600 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:29:21.600 07:27:43
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:21.600 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3694129' 00:29:21.600 killing process with pid 3694129 00:29:21.600 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3694129 00:29:21.600 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3694129 00:29:21.600 00:29:21.600 real 0m16.759s 00:29:21.600 user 0m33.383s 00:29:21.600 sys 0m3.523s 00:29:21.600 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:21.600 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:21.600 ************************************ 00:29:21.600 END TEST nvmf_digest_error 00:29:21.600 ************************************ 00:29:21.600 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:21.600 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:21.600 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:21.600 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:29:21.600 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:21.600 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:29:21.600 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:21.600 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:21.600 rmmod nvme_tcp 00:29:21.862 rmmod nvme_fabrics 00:29:21.862 rmmod nvme_keyring 00:29:21.862 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:21.862 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:29:21.862 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:29:21.862 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3694129 ']' 00:29:21.862 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3694129 00:29:21.862 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 3694129 ']' 00:29:21.862 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 3694129 00:29:21.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3694129) - No such process 00:29:21.862 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 3694129 is not found' 00:29:21.862 Process with pid 3694129 is not found 00:29:21.862 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:21.862 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:21.862 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:21.862 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:29:21.862 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:29:21.862 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:21.862 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- 
# iptables-restore 00:29:21.862 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:21.862 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:21.862 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.862 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.862 07:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.776 07:27:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:23.776 00:29:23.776 real 0m43.622s 00:29:23.776 user 1m8.537s 00:29:23.776 sys 0m13.274s 00:29:23.776 07:27:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:23.776 07:27:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:23.776 ************************************ 00:29:23.776 END TEST nvmf_digest 00:29:23.776 ************************************ 00:29:23.776 07:27:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:29:23.776 07:27:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:29:23.776 07:27:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:29:23.776 07:27:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:23.776 07:27:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:23.776 07:27:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:23.776 07:27:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.037 ************************************ 00:29:24.037 START TEST nvmf_bdevperf 00:29:24.037 ************************************ 00:29:24.037 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:24.037 * Looking for test storage... 
00:29:24.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:24.037 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:24.037 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:29:24.037 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:24.037 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:24.037 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:24.037 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:24.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.038 --rc genhtml_branch_coverage=1 00:29:24.038 --rc genhtml_function_coverage=1 00:29:24.038 --rc genhtml_legend=1 00:29:24.038 --rc geninfo_all_blocks=1 00:29:24.038 --rc geninfo_unexecuted_blocks=1 00:29:24.038 00:29:24.038 ' 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:24.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.038 --rc genhtml_branch_coverage=1 00:29:24.038 --rc genhtml_function_coverage=1 00:29:24.038 --rc genhtml_legend=1 00:29:24.038 --rc geninfo_all_blocks=1 00:29:24.038 --rc geninfo_unexecuted_blocks=1 00:29:24.038 00:29:24.038 ' 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:24.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.038 --rc genhtml_branch_coverage=1 00:29:24.038 --rc genhtml_function_coverage=1 00:29:24.038 --rc genhtml_legend=1 00:29:24.038 --rc geninfo_all_blocks=1 00:29:24.038 --rc geninfo_unexecuted_blocks=1 00:29:24.038 00:29:24.038 ' 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:24.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.038 --rc genhtml_branch_coverage=1 00:29:24.038 --rc genhtml_function_coverage=1 00:29:24.038 --rc genhtml_legend=1 00:29:24.038 --rc geninfo_all_blocks=1 00:29:24.038 --rc geninfo_unexecuted_blocks=1 00:29:24.038 00:29:24.038 ' 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.038 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:24.039 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.039 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:29:24.039 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:24.039 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:24.039 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.039 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.039 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.299 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:24.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:24.300 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:24.300 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:24.300 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:24.300 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:24.300 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:24.300 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:24.300 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:24.300 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.300 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:24.300 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:24.300 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:24.300 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.300 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.300 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.300 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:24.300 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:24.300 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:24.300 07:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:32.441 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:32.441 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:32.441 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.441 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:32.442 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:32.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:32.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:29:32.442 00:29:32.442 --- 10.0.0.2 ping statistics --- 00:29:32.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.442 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:32.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:32.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:29:32.442 00:29:32.442 --- 10.0.0.1 ping statistics --- 00:29:32.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.442 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3701569 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3701569 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 3701569 ']' 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:32.442 07:27:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:32.442 [2024-11-20 07:27:53.932964] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:29:32.442 [2024-11-20 07:27:53.933034] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:32.442 [2024-11-20 07:27:54.033512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:32.442 [2024-11-20 07:27:54.086122] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:32.442 [2024-11-20 07:27:54.086184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:32.442 [2024-11-20 07:27:54.086193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:32.442 [2024-11-20 07:27:54.086200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:32.442 [2024-11-20 07:27:54.086207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:32.442 [2024-11-20 07:27:54.088271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:32.442 [2024-11-20 07:27:54.088749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:32.442 [2024-11-20 07:27:54.088752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.704 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:32.704 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:29:32.704 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:32.704 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:32.704 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:32.704 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:32.704 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:32.704 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.704 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:32.704 [2024-11-20 07:27:54.800232] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:32.704 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.704 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:32.704 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.704 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:32.704 Malloc0 00:29:32.704 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.705 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:32.705 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.705 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:32.705 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:29:32.705 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:32.705 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.705 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:32.705 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.705 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:32.705 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.705 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:32.705 [2024-11-20 07:27:54.873695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:32.705 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.705 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:32.705 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:32.705 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:32.705 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:32.705 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:32.705 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:32.705 { 00:29:32.705 "params": { 00:29:32.705 "name": "Nvme$subsystem", 00:29:32.705 "trtype": "$TEST_TRANSPORT", 00:29:32.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.705 "adrfam": "ipv4", 00:29:32.705 "trsvcid": "$NVMF_PORT", 00:29:32.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.705 "hdgst": ${hdgst:-false}, 00:29:32.705 "ddgst": ${ddgst:-false} 00:29:32.705 }, 00:29:32.705 "method": "bdev_nvme_attach_controller" 00:29:32.705 } 00:29:32.705 EOF 00:29:32.705 )") 00:29:32.705 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:32.705 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:32.705 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:32.705 07:27:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:32.705 "params": { 00:29:32.705 "name": "Nvme1", 00:29:32.705 "trtype": "tcp", 00:29:32.705 "traddr": "10.0.0.2", 00:29:32.705 "adrfam": "ipv4", 00:29:32.705 "trsvcid": "4420", 00:29:32.705 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:32.705 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:32.705 "hdgst": false, 00:29:32.705 "ddgst": false 00:29:32.705 }, 00:29:32.705 "method": "bdev_nvme_attach_controller" 00:29:32.705 }' 00:29:32.705 [2024-11-20 07:27:54.940815] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
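For reference, the gen_nvmf_target_json trace above emits the bdev_nvme_attach_controller object shown and hands it to bdevperf through process substitution (--json /dev/fd/62). A standalone equivalent that could be replayed by hand is sketched below; the inner object is copied verbatim from the trace, the outer subsystems/config wrapper is an assumption about how the helper wraps it, and bdevperf stands for the full build/examples/bdevperf path used above:

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1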
00:29:32.705 [2024-11-20 07:27:54.940885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3701911 ] 00:29:32.966 [2024-11-20 07:27:55.034013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.966 [2024-11-20 07:27:55.087259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.227 Running I/O for 1 seconds... 00:29:34.172 8562.00 IOPS, 33.45 MiB/s 00:29:34.172 Latency(us) 00:29:34.172 [2024-11-20T06:27:56.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.172 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:34.172 Verification LBA range: start 0x0 length 0x4000 00:29:34.172 Nvme1n1 : 1.05 8267.08 32.29 0.00 0.00 14953.74 2443.95 48059.73 00:29:34.172 [2024-11-20T06:27:56.450Z] =================================================================================================================== 00:29:34.172 [2024-11-20T06:27:56.450Z] Total : 8267.08 32.29 0.00 0.00 14953.74 2443.95 48059.73 00:29:34.433 07:27:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3702230 00:29:34.433 07:27:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:34.433 07:27:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:34.433 07:27:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:34.433 07:27:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:34.433 07:27:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:34.433 07:27:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.433 07:27:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.433 { 00:29:34.433 "params": { 00:29:34.433 "name": "Nvme$subsystem", 00:29:34.433 "trtype": "$TEST_TRANSPORT", 00:29:34.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.433 "adrfam": "ipv4", 00:29:34.433 "trsvcid": "$NVMF_PORT", 00:29:34.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.433 "hdgst": ${hdgst:-false}, 00:29:34.433 "ddgst": ${ddgst:-false} 00:29:34.433 }, 00:29:34.433 "method": "bdev_nvme_attach_controller" 00:29:34.433 } 00:29:34.433 EOF 00:29:34.433 )") 00:29:34.433 07:27:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:34.433 07:27:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
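The 1-second summary above is internally consistent: with the 4096-byte I/O size set by -o 4096, throughput in MiB/s is just IOPS scaled by the I/O size. An ad-hoc check (not part of the test scripts):

awk 'BEGIN { printf "%.2f MiB/s\n", 8267.08 * 4096 / (1024 * 1024) }'
# -> 32.29 MiB/s, matching the Nvme1n1 and Total rows of the table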
00:29:34.433 07:27:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:34.433 07:27:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:34.433 "params": { 00:29:34.433 "name": "Nvme1", 00:29:34.433 "trtype": "tcp", 00:29:34.433 "traddr": "10.0.0.2", 00:29:34.433 "adrfam": "ipv4", 00:29:34.433 "trsvcid": "4420", 00:29:34.433 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:34.433 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:34.433 "hdgst": false, 00:29:34.433 "ddgst": false 00:29:34.433 }, 00:29:34.433 "method": "bdev_nvme_attach_controller" 00:29:34.433 }' [2024-11-20 07:27:56.508626] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:29:34.433 [2024-11-20 07:27:56.508703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3702230 ] 00:29:34.434 [2024-11-20 07:27:56.601265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.434 [2024-11-20 07:27:56.647070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.694 Running I/O for 15 seconds... 00:29:37.030 11434.00 IOPS, 44.66 MiB/s [2024-11-20T06:27:59.573Z] 11320.00 IOPS, 44.22 MiB/s [2024-11-20T06:27:59.573Z] 07:27:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3701569 00:29:37.295 07:27:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:37.295 [2024-11-20 07:27:59.472126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.295 [2024-11-20 07:27:59.472261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... roughly 125 further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided (07:27:59.472286 through 07:27:59.474459): after the target is killed, every queued WRITE (lba 96440-97328) and READ (lba 96320-96424) on qid:1 is completed as ABORTED - SQ DELETION (00/08) ...]
00:29:37.296 [2024-11-20 07:27:59.474468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8e1d0 is same with the state(6) to be set 00:29:37.299 [2024-11-20 07:27:59.474478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.299 [2024-11-20 07:27:59.474484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.299 [2024-11-20 07:27:59.474490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96432 len:8 PRP1 0x0 PRP2 0x0 00:29:37.299 [2024-11-20 07:27:59.474498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.299 [2024-11-20 07:27:59.478078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.299 [2024-11-20 07:27:59.478130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:37.299 [2024-11-20 07:27:59.478888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.299 [2024-11-20 07:27:59.478906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:37.299 [2024-11-20 07:27:59.478915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:37.299 [2024-11-20 07:27:59.479132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:37.299 [2024-11-20 07:27:59.479354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.299 [2024-11-20 07:27:59.479364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.299 [2024-11-20 07:27:59.479373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.299 [2024-11-20 07:27:59.479382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
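errno = 111 in the posix_sock_create failures above is ECONNREFUSED: with nvmf_tgt (pid 3701569) gone, nothing is listening on 10.0.0.2 port 4420, so every reconnect the bdev_nvme reset path attempts is refused immediately. A quick ad-hoc probe of that condition (assumes nc is installed; not part of the test scripts):

nc -z -w 1 10.0.0.2 4420 || echo "refused: no listener on 4420 (errno 111 / ECONNREFUSED)"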
[... seven further identical reset cycles elided (07:27:59.492137 through 07:27:59.576117): each cycle logs "resetting controller" -> posix.c connect() failed, errno = 111 -> sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 -> Failed to flush tqpair=0xb7b000 (9): Bad file descriptor -> Ctrlr is in error state -> controller reinitialization failed -> in failed state. -> Resetting controller failed. ...]
00:29:37.562 [2024-11-20 07:27:59.588719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.562 [2024-11-20 07:27:59.589258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.562 [2024-11-20 07:27:59.589282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:37.562 [2024-11-20 07:27:59.589291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:37.562 [2024-11-20 07:27:59.589508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:37.562 [2024-11-20 07:27:59.589725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.562 [2024-11-20 07:27:59.589735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.562 [2024-11-20 07:27:59.589742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.562 [2024-11-20 07:27:59.589750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:37.562 [2024-11-20 07:27:59.602544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.562 [2024-11-20 07:27:59.603097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.562 [2024-11-20 07:27:59.603117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:37.562 [2024-11-20 07:27:59.603125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:37.562 [2024-11-20 07:27:59.603349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:37.562 [2024-11-20 07:27:59.603567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.562 [2024-11-20 07:27:59.603576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.562 [2024-11-20 07:27:59.603583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.562 [2024-11-20 07:27:59.603591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.562 [2024-11-20 07:27:59.616406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.562 [2024-11-20 07:27:59.616958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.562 [2024-11-20 07:27:59.616980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:37.562 [2024-11-20 07:27:59.616988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:37.562 [2024-11-20 07:27:59.617213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:37.562 [2024-11-20 07:27:59.617431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.562 [2024-11-20 07:27:59.617439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.562 [2024-11-20 07:27:59.617447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.562 [2024-11-20 07:27:59.617455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:37.562 [2024-11-20 07:27:59.630271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.562 [2024-11-20 07:27:59.630790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.562 [2024-11-20 07:27:59.630811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:37.562 [2024-11-20 07:27:59.630819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:37.562 [2024-11-20 07:27:59.631037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:37.562 [2024-11-20 07:27:59.631266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.562 [2024-11-20 07:27:59.631275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.563 [2024-11-20 07:27:59.631283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.563 [2024-11-20 07:27:59.631290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.563 [2024-11-20 07:27:59.644410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.563 [2024-11-20 07:27:59.644996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.563 [2024-11-20 07:27:59.645021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.563 [2024-11-20 07:27:59.645029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.563 [2024-11-20 07:27:59.645260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.563 [2024-11-20 07:27:59.645479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.563 [2024-11-20 07:27:59.645496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.563 [2024-11-20 07:27:59.645504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.563 [2024-11-20 07:27:59.645512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.563 [2024-11-20 07:27:59.658332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.563 [2024-11-20 07:27:59.658980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.563 [2024-11-20 07:27:59.659042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.563 [2024-11-20 07:27:59.659055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.563 [2024-11-20 07:27:59.659323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.563 [2024-11-20 07:27:59.659549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.563 [2024-11-20 07:27:59.659560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.563 [2024-11-20 07:27:59.659569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.563 [2024-11-20 07:27:59.659579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.563 [2024-11-20 07:27:59.672235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.563 [2024-11-20 07:27:59.672790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.563 [2024-11-20 07:27:59.672818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.563 [2024-11-20 07:27:59.672835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.563 [2024-11-20 07:27:59.673056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.563 [2024-11-20 07:27:59.673287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.563 [2024-11-20 07:27:59.673297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.563 [2024-11-20 07:27:59.673305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.563 [2024-11-20 07:27:59.673312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.563 [2024-11-20 07:27:59.686115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.563 [2024-11-20 07:27:59.686687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.563 [2024-11-20 07:27:59.686712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.563 [2024-11-20 07:27:59.686720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.563 [2024-11-20 07:27:59.686938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.563 [2024-11-20 07:27:59.687156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.563 [2024-11-20 07:27:59.687177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.563 [2024-11-20 07:27:59.687185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.563 [2024-11-20 07:27:59.687192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.563 [2024-11-20 07:27:59.700008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.563 [2024-11-20 07:27:59.700676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.563 [2024-11-20 07:27:59.700738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.563 [2024-11-20 07:27:59.700751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.563 [2024-11-20 07:27:59.701003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.563 [2024-11-20 07:27:59.701242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.563 [2024-11-20 07:27:59.701253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.563 [2024-11-20 07:27:59.701263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.563 [2024-11-20 07:27:59.701272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.563 [2024-11-20 07:27:59.713877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.563 [2024-11-20 07:27:59.714607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.563 [2024-11-20 07:27:59.714671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.563 [2024-11-20 07:27:59.714683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.563 [2024-11-20 07:27:59.714936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.563 [2024-11-20 07:27:59.715183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.563 [2024-11-20 07:27:59.715194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.563 [2024-11-20 07:27:59.715203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.563 [2024-11-20 07:27:59.715212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.563 [2024-11-20 07:27:59.727806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.563 [2024-11-20 07:27:59.728391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.563 [2024-11-20 07:27:59.728420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.563 [2024-11-20 07:27:59.728429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.563 [2024-11-20 07:27:59.728650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.563 [2024-11-20 07:27:59.728869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.563 [2024-11-20 07:27:59.728879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.563 [2024-11-20 07:27:59.728888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.563 [2024-11-20 07:27:59.728897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.563 [2024-11-20 07:27:59.741688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.563 [2024-11-20 07:27:59.742233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.563 [2024-11-20 07:27:59.742258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.563 [2024-11-20 07:27:59.742266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.563 [2024-11-20 07:27:59.742486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.563 [2024-11-20 07:27:59.742704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.563 [2024-11-20 07:27:59.742715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.563 [2024-11-20 07:27:59.742723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.563 [2024-11-20 07:27:59.742732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.563 [2024-11-20 07:27:59.755523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.563 [2024-11-20 07:27:59.756088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.563 [2024-11-20 07:27:59.756110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.563 [2024-11-20 07:27:59.756119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.563 [2024-11-20 07:27:59.756345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.564 [2024-11-20 07:27:59.756564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.564 [2024-11-20 07:27:59.756574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.564 [2024-11-20 07:27:59.756589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.564 [2024-11-20 07:27:59.756598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.564 [2024-11-20 07:27:59.769387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.564 [2024-11-20 07:27:59.769955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.564 [2024-11-20 07:27:59.769978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.564 [2024-11-20 07:27:59.769987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.564 [2024-11-20 07:27:59.770212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.564 [2024-11-20 07:27:59.770432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.564 [2024-11-20 07:27:59.770442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.564 [2024-11-20 07:27:59.770449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.564 [2024-11-20 07:27:59.770457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.564 [2024-11-20 07:27:59.783236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.564 [2024-11-20 07:27:59.783914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.564 [2024-11-20 07:27:59.783977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.564 [2024-11-20 07:27:59.783990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.564 [2024-11-20 07:27:59.784253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.564 [2024-11-20 07:27:59.784478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.564 [2024-11-20 07:27:59.784490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.564 [2024-11-20 07:27:59.784498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.564 [2024-11-20 07:27:59.784507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.564 [2024-11-20 07:27:59.797112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.564 [2024-11-20 07:27:59.797773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.564 [2024-11-20 07:27:59.797838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.564 [2024-11-20 07:27:59.797850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.564 [2024-11-20 07:27:59.798103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.564 [2024-11-20 07:27:59.798343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.564 [2024-11-20 07:27:59.798354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.564 [2024-11-20 07:27:59.798363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.564 [2024-11-20 07:27:59.798372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.564 [2024-11-20 07:27:59.810968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.564 [2024-11-20 07:27:59.811626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.564 [2024-11-20 07:27:59.811690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.564 [2024-11-20 07:27:59.811703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.564 [2024-11-20 07:27:59.811956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.564 [2024-11-20 07:27:59.812195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.564 [2024-11-20 07:27:59.812205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.564 [2024-11-20 07:27:59.812214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.564 [2024-11-20 07:27:59.812223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.564 [2024-11-20 07:27:59.824844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.564 [2024-11-20 07:27:59.826288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.564 [2024-11-20 07:27:59.826329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.564 [2024-11-20 07:27:59.826340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.564 [2024-11-20 07:27:59.826580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.564 [2024-11-20 07:27:59.826803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.564 [2024-11-20 07:27:59.826813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.564 [2024-11-20 07:27:59.826821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.564 [2024-11-20 07:27:59.826830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.828 [2024-11-20 07:27:59.838607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.828 [2024-11-20 07:27:59.839193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.828 [2024-11-20 07:27:59.839219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.828 [2024-11-20 07:27:59.839228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.828 [2024-11-20 07:27:59.839448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.828 [2024-11-20 07:27:59.839668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.828 [2024-11-20 07:27:59.839678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.828 [2024-11-20 07:27:59.839686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.828 [2024-11-20 07:27:59.839694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.828 [2024-11-20 07:27:59.852513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.828 [2024-11-20 07:27:59.853198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.828 [2024-11-20 07:27:59.853262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.828 [2024-11-20 07:27:59.853283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.828 [2024-11-20 07:27:59.853538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.828 [2024-11-20 07:27:59.853764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.828 [2024-11-20 07:27:59.853775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.828 [2024-11-20 07:27:59.853783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.828 [2024-11-20 07:27:59.853794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.828 [2024-11-20 07:27:59.866418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.828 [2024-11-20 07:27:59.866995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.828 [2024-11-20 07:27:59.867025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.828 [2024-11-20 07:27:59.867034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.828 [2024-11-20 07:27:59.867262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.828 [2024-11-20 07:27:59.867481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.828 [2024-11-20 07:27:59.867492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.828 [2024-11-20 07:27:59.867499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.828 [2024-11-20 07:27:59.867508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.828 [2024-11-20 07:27:59.880306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.828 [2024-11-20 07:27:59.880911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.828 [2024-11-20 07:27:59.880935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.828 [2024-11-20 07:27:59.880944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.828 [2024-11-20 07:27:59.881171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.828 [2024-11-20 07:27:59.881391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.828 [2024-11-20 07:27:59.881407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.828 [2024-11-20 07:27:59.881415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.828 [2024-11-20 07:27:59.881423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.828 [2024-11-20 07:27:59.894203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.828 [2024-11-20 07:27:59.894899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.828 [2024-11-20 07:27:59.894962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.828 [2024-11-20 07:27:59.894975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.828 [2024-11-20 07:27:59.895240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.828 [2024-11-20 07:27:59.895473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.828 [2024-11-20 07:27:59.895483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.828 [2024-11-20 07:27:59.895492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.828 [2024-11-20 07:27:59.895501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.828 [2024-11-20 07:27:59.908087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.828 [2024-11-20 07:27:59.908681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.828 [2024-11-20 07:27:59.908709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.828 [2024-11-20 07:27:59.908718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.828 [2024-11-20 07:27:59.908939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.828 [2024-11-20 07:27:59.909157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.828 [2024-11-20 07:27:59.909174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.828 [2024-11-20 07:27:59.909182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.828 [2024-11-20 07:27:59.909190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.828 [2024-11-20 07:27:59.921996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.828 [2024-11-20 07:27:59.922544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.828 [2024-11-20 07:27:59.922568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.828 [2024-11-20 07:27:59.922576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.828 [2024-11-20 07:27:59.922795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.828 [2024-11-20 07:27:59.923012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.828 [2024-11-20 07:27:59.923028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.828 [2024-11-20 07:27:59.923037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.828 [2024-11-20 07:27:59.923045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.828 [2024-11-20 07:27:59.935860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.828 [2024-11-20 07:27:59.936508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.828 [2024-11-20 07:27:59.936572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.828 [2024-11-20 07:27:59.936585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.828 [2024-11-20 07:27:59.936837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.828 [2024-11-20 07:27:59.937062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.828 [2024-11-20 07:27:59.937073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.828 [2024-11-20 07:27:59.937081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.828 [2024-11-20 07:27:59.937098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.828 [2024-11-20 07:27:59.949738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.828 [2024-11-20 07:27:59.950341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.828 [2024-11-20 07:27:59.950405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.828 [2024-11-20 07:27:59.950420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.828 [2024-11-20 07:27:59.950674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.828 [2024-11-20 07:27:59.950898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.828 [2024-11-20 07:27:59.950910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.828 [2024-11-20 07:27:59.950918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.829 [2024-11-20 07:27:59.950927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.829 [2024-11-20 07:27:59.963535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.829 [2024-11-20 07:27:59.964131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.829 [2024-11-20 07:27:59.964167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.829 [2024-11-20 07:27:59.964179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.829 [2024-11-20 07:27:59.964399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.829 [2024-11-20 07:27:59.964619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.829 [2024-11-20 07:27:59.964633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.829 [2024-11-20 07:27:59.964643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.829 [2024-11-20 07:27:59.964652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.829 9474.33 IOPS, 37.01 MiB/s [2024-11-20T06:28:00.107Z] [2024-11-20 07:27:59.977305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.829 [2024-11-20 07:27:59.977895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.829 [2024-11-20 07:27:59.977959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.829 [2024-11-20 07:27:59.977972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.829 [2024-11-20 07:27:59.978239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.829 [2024-11-20 07:27:59.978463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.829 [2024-11-20 07:27:59.978473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.829 [2024-11-20 07:27:59.978482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.829 [2024-11-20 07:27:59.978492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.829 [2024-11-20 07:27:59.991108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.829 [2024-11-20 07:27:59.991710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.829 [2024-11-20 07:27:59.991737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.829 [2024-11-20 07:27:59.991746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.829 [2024-11-20 07:27:59.991966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.829 [2024-11-20 07:27:59.992193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.829 [2024-11-20 07:27:59.992205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.829 [2024-11-20 07:27:59.992213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.829 [2024-11-20 07:27:59.992221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.829 [2024-11-20 07:28:00.005537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.829 [2024-11-20 07:28:00.006090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.829 [2024-11-20 07:28:00.006119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.829 [2024-11-20 07:28:00.006129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.829 [2024-11-20 07:28:00.006360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.829 [2024-11-20 07:28:00.006580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.829 [2024-11-20 07:28:00.006590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.829 [2024-11-20 07:28:00.006600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.829 [2024-11-20 07:28:00.006610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.829 [2024-11-20 07:28:00.019481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.829 [2024-11-20 07:28:00.020185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.829 [2024-11-20 07:28:00.020250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.829 [2024-11-20 07:28:00.020263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.829 [2024-11-20 07:28:00.020515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.829 [2024-11-20 07:28:00.020740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.829 [2024-11-20 07:28:00.020752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.829 [2024-11-20 07:28:00.020761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.829 [2024-11-20 07:28:00.020770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.829 [2024-11-20 07:28:00.033368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.829 [2024-11-20 07:28:00.034034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.829 [2024-11-20 07:28:00.034097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.829 [2024-11-20 07:28:00.034120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.829 [2024-11-20 07:28:00.034387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.829 [2024-11-20 07:28:00.034612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.829 [2024-11-20 07:28:00.034624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.829 [2024-11-20 07:28:00.034633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.829 [2024-11-20 07:28:00.034644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.829 [2024-11-20 07:28:00.047249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.829 [2024-11-20 07:28:00.047908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.829 [2024-11-20 07:28:00.047970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.829 [2024-11-20 07:28:00.047984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.829 [2024-11-20 07:28:00.048253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.829 [2024-11-20 07:28:00.048480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.829 [2024-11-20 07:28:00.048491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.829 [2024-11-20 07:28:00.048500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.829 [2024-11-20 07:28:00.048510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.829 [2024-11-20 07:28:00.061129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.829 [2024-11-20 07:28:00.061718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.829 [2024-11-20 07:28:00.061782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.829 [2024-11-20 07:28:00.061795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.829 [2024-11-20 07:28:00.062047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.829 [2024-11-20 07:28:00.062291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.829 [2024-11-20 07:28:00.062304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.829 [2024-11-20 07:28:00.062313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.829 [2024-11-20 07:28:00.062322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.829 [2024-11-20 07:28:00.074943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.829 [2024-11-20 07:28:00.075660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.829 [2024-11-20 07:28:00.075724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.829 [2024-11-20 07:28:00.075736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.829 [2024-11-20 07:28:00.075989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.829 [2024-11-20 07:28:00.076230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.829 [2024-11-20 07:28:00.076242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.829 [2024-11-20 07:28:00.076250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.829 [2024-11-20 07:28:00.076260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.829 [2024-11-20 07:28:00.088875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.829 [2024-11-20 07:28:00.089545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.829 [2024-11-20 07:28:00.089608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:37.829 [2024-11-20 07:28:00.089621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:37.829 [2024-11-20 07:28:00.089873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:37.829 [2024-11-20 07:28:00.090098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.830 [2024-11-20 07:28:00.090109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.830 [2024-11-20 07:28:00.090118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.830 [2024-11-20 07:28:00.090127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.092 [2024-11-20 07:28:00.102743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.092 [2024-11-20 07:28:00.103283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.092 [2024-11-20 07:28:00.103312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:38.092 [2024-11-20 07:28:00.103321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:38.092 [2024-11-20 07:28:00.103541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:38.092 [2024-11-20 07:28:00.103762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.092 [2024-11-20 07:28:00.103773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.092 [2024-11-20 07:28:00.103782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.092 [2024-11-20 07:28:00.103791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.092 [2024-11-20 07:28:00.116625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.092 [2024-11-20 07:28:00.117272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.092 [2024-11-20 07:28:00.117336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:38.092 [2024-11-20 07:28:00.117349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:38.092 [2024-11-20 07:28:00.117601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:38.092 [2024-11-20 07:28:00.117827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.092 [2024-11-20 07:28:00.117837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.092 [2024-11-20 07:28:00.117846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.092 [2024-11-20 07:28:00.117863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.092 [2024-11-20 07:28:00.130464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.092 [2024-11-20 07:28:00.131119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.092 [2024-11-20 07:28:00.131193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:38.092 [2024-11-20 07:28:00.131208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:38.092 [2024-11-20 07:28:00.131461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:38.092 [2024-11-20 07:28:00.131685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.092 [2024-11-20 07:28:00.131695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.092 [2024-11-20 07:28:00.131703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.093 [2024-11-20 07:28:00.131714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.092 [2024-11-20 07:28:00.144325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.092 [2024-11-20 07:28:00.144978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.092 [2024-11-20 07:28:00.145041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:38.092 [2024-11-20 07:28:00.145054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:38.092 [2024-11-20 07:28:00.145321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:38.092 [2024-11-20 07:28:00.145547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.092 [2024-11-20 07:28:00.145557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.092 [2024-11-20 07:28:00.145566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.092 [2024-11-20 07:28:00.145575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.092 [2024-11-20 07:28:00.158222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.092 [2024-11-20 07:28:00.158913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.092 [2024-11-20 07:28:00.158978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:38.092 [2024-11-20 07:28:00.158990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:38.092 [2024-11-20 07:28:00.159258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:38.092 [2024-11-20 07:28:00.159484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.092 [2024-11-20 07:28:00.159494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.093 [2024-11-20 07:28:00.159503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.093 [2024-11-20 07:28:00.159512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.093 [2024-11-20 07:28:00.172167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.093 [2024-11-20 07:28:00.172841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.093 [2024-11-20 07:28:00.172904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.093 [2024-11-20 07:28:00.172917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.093 [2024-11-20 07:28:00.173185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.093 [2024-11-20 07:28:00.173411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.093 [2024-11-20 07:28:00.173420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.093 [2024-11-20 07:28:00.173429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.093 [2024-11-20 07:28:00.173439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.093 [2024-11-20 07:28:00.186017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.093 [2024-11-20 07:28:00.186682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.093 [2024-11-20 07:28:00.186745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.093 [2024-11-20 07:28:00.186758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.093 [2024-11-20 07:28:00.187010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.093 [2024-11-20 07:28:00.187248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.093 [2024-11-20 07:28:00.187258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.093 [2024-11-20 07:28:00.187267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.093 [2024-11-20 07:28:00.187276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.093 [2024-11-20 07:28:00.199879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.093 [2024-11-20 07:28:00.200547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.093 [2024-11-20 07:28:00.200610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.093 [2024-11-20 07:28:00.200623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.093 [2024-11-20 07:28:00.200875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.093 [2024-11-20 07:28:00.201100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.093 [2024-11-20 07:28:00.201110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.093 [2024-11-20 07:28:00.201118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.093 [2024-11-20 07:28:00.201127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.093 [2024-11-20 07:28:00.213730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.093 [2024-11-20 07:28:00.214317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.093 [2024-11-20 07:28:00.214381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.093 [2024-11-20 07:28:00.214395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.093 [2024-11-20 07:28:00.214654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.093 [2024-11-20 07:28:00.214879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.093 [2024-11-20 07:28:00.214889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.093 [2024-11-20 07:28:00.214897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.093 [2024-11-20 07:28:00.214906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.093 [2024-11-20 07:28:00.227525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.093 [2024-11-20 07:28:00.228114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.093 [2024-11-20 07:28:00.228142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.093 [2024-11-20 07:28:00.228151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.093 [2024-11-20 07:28:00.228379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.093 [2024-11-20 07:28:00.228599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.093 [2024-11-20 07:28:00.228608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.093 [2024-11-20 07:28:00.228616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.093 [2024-11-20 07:28:00.228624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.093 [2024-11-20 07:28:00.241398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.093 [2024-11-20 07:28:00.241961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.093 [2024-11-20 07:28:00.241984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.093 [2024-11-20 07:28:00.241993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.093 [2024-11-20 07:28:00.242219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.093 [2024-11-20 07:28:00.242438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.093 [2024-11-20 07:28:00.242448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.093 [2024-11-20 07:28:00.242456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.093 [2024-11-20 07:28:00.242464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.093 [2024-11-20 07:28:00.255240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.093 [2024-11-20 07:28:00.255711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.093 [2024-11-20 07:28:00.255738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.093 [2024-11-20 07:28:00.255747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.093 [2024-11-20 07:28:00.255967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.093 [2024-11-20 07:28:00.256193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.093 [2024-11-20 07:28:00.256211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.093 [2024-11-20 07:28:00.256219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.093 [2024-11-20 07:28:00.256226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.093 [2024-11-20 07:28:00.269012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.093 [2024-11-20 07:28:00.269584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.093 [2024-11-20 07:28:00.269607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.093 [2024-11-20 07:28:00.269616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.093 [2024-11-20 07:28:00.269834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.093 [2024-11-20 07:28:00.270052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.093 [2024-11-20 07:28:00.270062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.093 [2024-11-20 07:28:00.270069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.093 [2024-11-20 07:28:00.270077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.093 [2024-11-20 07:28:00.282868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.093 [2024-11-20 07:28:00.283475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.093 [2024-11-20 07:28:00.283538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.093 [2024-11-20 07:28:00.283551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.093 [2024-11-20 07:28:00.283803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.093 [2024-11-20 07:28:00.284028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.093 [2024-11-20 07:28:00.284037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.094 [2024-11-20 07:28:00.284046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.094 [2024-11-20 07:28:00.284055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.094 [2024-11-20 07:28:00.296644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.094 [2024-11-20 07:28:00.297272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.094 [2024-11-20 07:28:00.297336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.094 [2024-11-20 07:28:00.297349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.094 [2024-11-20 07:28:00.297601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.094 [2024-11-20 07:28:00.297826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.094 [2024-11-20 07:28:00.297836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.094 [2024-11-20 07:28:00.297845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.094 [2024-11-20 07:28:00.297861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.094 [2024-11-20 07:28:00.310461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.094 [2024-11-20 07:28:00.311124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.094 [2024-11-20 07:28:00.311197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.094 [2024-11-20 07:28:00.311210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.094 [2024-11-20 07:28:00.311463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.094 [2024-11-20 07:28:00.311687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.094 [2024-11-20 07:28:00.311697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.094 [2024-11-20 07:28:00.311706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.094 [2024-11-20 07:28:00.311715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.094 [2024-11-20 07:28:00.324331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.094 [2024-11-20 07:28:00.324987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.094 [2024-11-20 07:28:00.325049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.094 [2024-11-20 07:28:00.325062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.094 [2024-11-20 07:28:00.325329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.094 [2024-11-20 07:28:00.325555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.094 [2024-11-20 07:28:00.325564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.094 [2024-11-20 07:28:00.325573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.094 [2024-11-20 07:28:00.325582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.094 [2024-11-20 07:28:00.338172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.094 [2024-11-20 07:28:00.338844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.094 [2024-11-20 07:28:00.338907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.094 [2024-11-20 07:28:00.338920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.094 [2024-11-20 07:28:00.339186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.094 [2024-11-20 07:28:00.339412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.094 [2024-11-20 07:28:00.339422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.094 [2024-11-20 07:28:00.339431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.094 [2024-11-20 07:28:00.339440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.094 [2024-11-20 07:28:00.352022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.094 [2024-11-20 07:28:00.352710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.094 [2024-11-20 07:28:00.352774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.094 [2024-11-20 07:28:00.352787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.094 [2024-11-20 07:28:00.353038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.094 [2024-11-20 07:28:00.353278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.094 [2024-11-20 07:28:00.353289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.094 [2024-11-20 07:28:00.353297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.094 [2024-11-20 07:28:00.353306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.357 [2024-11-20 07:28:00.365892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.357 [2024-11-20 07:28:00.366535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-11-20 07:28:00.366598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.357 [2024-11-20 07:28:00.366611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.357 [2024-11-20 07:28:00.366863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.357 [2024-11-20 07:28:00.367088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.357 [2024-11-20 07:28:00.367098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.357 [2024-11-20 07:28:00.367106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.357 [2024-11-20 07:28:00.367116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.357 [2024-11-20 07:28:00.379739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.357 [2024-11-20 07:28:00.380322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-11-20 07:28:00.380386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.357 [2024-11-20 07:28:00.380400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.357 [2024-11-20 07:28:00.380654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.357 [2024-11-20 07:28:00.380878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.357 [2024-11-20 07:28:00.380889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.357 [2024-11-20 07:28:00.380897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.357 [2024-11-20 07:28:00.380907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.357 [2024-11-20 07:28:00.393504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.357 [2024-11-20 07:28:00.394093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-11-20 07:28:00.394121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.357 [2024-11-20 07:28:00.394131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.357 [2024-11-20 07:28:00.394365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.357 [2024-11-20 07:28:00.394586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.357 [2024-11-20 07:28:00.394595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.357 [2024-11-20 07:28:00.394603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.357 [2024-11-20 07:28:00.394610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.357 [2024-11-20 07:28:00.407378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.357 [2024-11-20 07:28:00.407958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-11-20 07:28:00.407981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.357 [2024-11-20 07:28:00.407990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.357 [2024-11-20 07:28:00.408217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.357 [2024-11-20 07:28:00.408436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.357 [2024-11-20 07:28:00.408446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.357 [2024-11-20 07:28:00.408453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.357 [2024-11-20 07:28:00.408461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.357 [2024-11-20 07:28:00.421147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.357 [2024-11-20 07:28:00.421720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-11-20 07:28:00.421744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.357 [2024-11-20 07:28:00.421752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.357 [2024-11-20 07:28:00.421970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.357 [2024-11-20 07:28:00.422196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.357 [2024-11-20 07:28:00.422208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.357 [2024-11-20 07:28:00.422215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.357 [2024-11-20 07:28:00.422223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.357 [2024-11-20 07:28:00.434994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.357 [2024-11-20 07:28:00.435562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-11-20 07:28:00.435585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.357 [2024-11-20 07:28:00.435594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.357 [2024-11-20 07:28:00.435811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.357 [2024-11-20 07:28:00.436029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.357 [2024-11-20 07:28:00.436045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.357 [2024-11-20 07:28:00.436053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.357 [2024-11-20 07:28:00.436062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.357 [2024-11-20 07:28:00.448835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.357 [2024-11-20 07:28:00.449498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-11-20 07:28:00.449561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.357 [2024-11-20 07:28:00.449575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.357 [2024-11-20 07:28:00.449827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.357 [2024-11-20 07:28:00.450051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.357 [2024-11-20 07:28:00.450061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.357 [2024-11-20 07:28:00.450070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.357 [2024-11-20 07:28:00.450079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.357 [2024-11-20 07:28:00.462669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.357 [2024-11-20 07:28:00.463262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-11-20 07:28:00.463325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.358 [2024-11-20 07:28:00.463338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.358 [2024-11-20 07:28:00.463591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.358 [2024-11-20 07:28:00.463815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.358 [2024-11-20 07:28:00.463825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.358 [2024-11-20 07:28:00.463833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.358 [2024-11-20 07:28:00.463842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.358 [2024-11-20 07:28:00.476455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.358 [2024-11-20 07:28:00.477055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.358 [2024-11-20 07:28:00.477116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.358 [2024-11-20 07:28:00.477129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.358 [2024-11-20 07:28:00.477397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.358 [2024-11-20 07:28:00.477625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.358 [2024-11-20 07:28:00.477636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.358 [2024-11-20 07:28:00.477645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.358 [2024-11-20 07:28:00.477669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.358 [2024-11-20 07:28:00.489081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.358 [2024-11-20 07:28:00.489606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.358 [2024-11-20 07:28:00.489664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.358 [2024-11-20 07:28:00.489674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.358 [2024-11-20 07:28:00.489855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.358 [2024-11-20 07:28:00.490012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.358 [2024-11-20 07:28:00.490019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.358 [2024-11-20 07:28:00.490026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.358 [2024-11-20 07:28:00.490033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.358 [2024-11-20 07:28:00.501806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.358 [2024-11-20 07:28:00.502193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.358 [2024-11-20 07:28:00.502218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.358 [2024-11-20 07:28:00.502225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.358 [2024-11-20 07:28:00.502379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.358 [2024-11-20 07:28:00.502531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.358 [2024-11-20 07:28:00.502537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.358 [2024-11-20 07:28:00.502543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.358 [2024-11-20 07:28:00.502549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.358 [2024-11-20 07:28:00.514420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.358 [2024-11-20 07:28:00.514928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.358 [2024-11-20 07:28:00.514948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.358 [2024-11-20 07:28:00.514954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.358 [2024-11-20 07:28:00.515104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.358 [2024-11-20 07:28:00.515262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.358 [2024-11-20 07:28:00.515270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.358 [2024-11-20 07:28:00.515275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.358 [2024-11-20 07:28:00.515281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.358 [2024-11-20 07:28:00.527001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.358 [2024-11-20 07:28:00.527581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.358 [2024-11-20 07:28:00.527629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.358 [2024-11-20 07:28:00.527638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.358 [2024-11-20 07:28:00.527810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.358 [2024-11-20 07:28:00.527964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.358 [2024-11-20 07:28:00.527971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.358 [2024-11-20 07:28:00.527977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.358 [2024-11-20 07:28:00.527984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.358 [2024-11-20 07:28:00.539573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.358 [2024-11-20 07:28:00.540141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.358 [2024-11-20 07:28:00.540188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.358 [2024-11-20 07:28:00.540198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.358 [2024-11-20 07:28:00.540370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.358 [2024-11-20 07:28:00.540524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.358 [2024-11-20 07:28:00.540531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.358 [2024-11-20 07:28:00.540536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.358 [2024-11-20 07:28:00.540542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.358 [2024-11-20 07:28:00.552260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.358 [2024-11-20 07:28:00.552810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.358 [2024-11-20 07:28:00.552849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.358 [2024-11-20 07:28:00.552857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.358 [2024-11-20 07:28:00.553027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.358 [2024-11-20 07:28:00.553189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.358 [2024-11-20 07:28:00.553198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.358 [2024-11-20 07:28:00.553203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.358 [2024-11-20 07:28:00.553209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.358 [2024-11-20 07:28:00.564916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.358 [2024-11-20 07:28:00.565500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.358 [2024-11-20 07:28:00.565536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.358 [2024-11-20 07:28:00.565544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.358 [2024-11-20 07:28:00.565716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.358 [2024-11-20 07:28:00.565869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.358 [2024-11-20 07:28:00.565875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.358 [2024-11-20 07:28:00.565881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.358 [2024-11-20 07:28:00.565887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.358 [2024-11-20 07:28:00.577618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.358 [2024-11-20 07:28:00.578136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.358 [2024-11-20 07:28:00.578177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.358 [2024-11-20 07:28:00.578185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.358 [2024-11-20 07:28:00.578352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.358 [2024-11-20 07:28:00.578505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.358 [2024-11-20 07:28:00.578512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.358 [2024-11-20 07:28:00.578518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.358 [2024-11-20 07:28:00.578524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.358 [2024-11-20 07:28:00.590228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.359 [2024-11-20 07:28:00.590789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-11-20 07:28:00.590822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.359 [2024-11-20 07:28:00.590831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.359 [2024-11-20 07:28:00.590997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.359 [2024-11-20 07:28:00.591149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.359 [2024-11-20 07:28:00.591156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.359 [2024-11-20 07:28:00.591170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.359 [2024-11-20 07:28:00.591176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.359 [2024-11-20 07:28:00.602876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.359 [2024-11-20 07:28:00.603450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-11-20 07:28:00.603483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.359 [2024-11-20 07:28:00.603491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.359 [2024-11-20 07:28:00.603657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.359 [2024-11-20 07:28:00.603809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.359 [2024-11-20 07:28:00.603819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.359 [2024-11-20 07:28:00.603825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.359 [2024-11-20 07:28:00.603832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.359 [2024-11-20 07:28:00.615536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.359 [2024-11-20 07:28:00.616087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-11-20 07:28:00.616118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.359 [2024-11-20 07:28:00.616126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.359 [2024-11-20 07:28:00.616301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.359 [2024-11-20 07:28:00.616454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.359 [2024-11-20 07:28:00.616460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.359 [2024-11-20 07:28:00.616466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.359 [2024-11-20 07:28:00.616472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.359 [2024-11-20 07:28:00.628181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.359 [2024-11-20 07:28:00.628710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-11-20 07:28:00.628741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.359 [2024-11-20 07:28:00.628749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.359 [2024-11-20 07:28:00.628914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.359 [2024-11-20 07:28:00.629065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.359 [2024-11-20 07:28:00.629071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.359 [2024-11-20 07:28:00.629077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.359 [2024-11-20 07:28:00.629083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.622 [2024-11-20 07:28:00.640789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.622 [2024-11-20 07:28:00.641335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.622 [2024-11-20 07:28:00.641365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.622 [2024-11-20 07:28:00.641374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.622 [2024-11-20 07:28:00.641539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.622 [2024-11-20 07:28:00.641691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.622 [2024-11-20 07:28:00.641697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.622 [2024-11-20 07:28:00.641703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.622 [2024-11-20 07:28:00.641708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.622 [2024-11-20 07:28:00.653433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.622 [2024-11-20 07:28:00.653994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.622 [2024-11-20 07:28:00.654025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.622 [2024-11-20 07:28:00.654033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.622 [2024-11-20 07:28:00.654205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.622 [2024-11-20 07:28:00.654358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.622 [2024-11-20 07:28:00.654364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.622 [2024-11-20 07:28:00.654370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.622 [2024-11-20 07:28:00.654376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.622 [2024-11-20 07:28:00.666074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.622 [2024-11-20 07:28:00.666660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.622 [2024-11-20 07:28:00.666691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.622 [2024-11-20 07:28:00.666699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.622 [2024-11-20 07:28:00.666864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.622 [2024-11-20 07:28:00.667016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.622 [2024-11-20 07:28:00.667023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.622 [2024-11-20 07:28:00.667029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.622 [2024-11-20 07:28:00.667034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.622 [2024-11-20 07:28:00.678746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.622 [2024-11-20 07:28:00.679247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.622 [2024-11-20 07:28:00.679277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.622 [2024-11-20 07:28:00.679286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.622 [2024-11-20 07:28:00.679452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.622 [2024-11-20 07:28:00.679604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.622 [2024-11-20 07:28:00.679610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.622 [2024-11-20 07:28:00.679616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.622 [2024-11-20 07:28:00.679622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.622 [2024-11-20 07:28:00.691323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.622 [2024-11-20 07:28:00.691847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.622 [2024-11-20 07:28:00.691882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.622 [2024-11-20 07:28:00.691890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.622 [2024-11-20 07:28:00.692054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.622 [2024-11-20 07:28:00.692213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.622 [2024-11-20 07:28:00.692220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.622 [2024-11-20 07:28:00.692226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.623 [2024-11-20 07:28:00.692232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.623 [2024-11-20 07:28:00.703933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.623 [2024-11-20 07:28:00.704538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.623 [2024-11-20 07:28:00.704569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.623 [2024-11-20 07:28:00.704577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.623 [2024-11-20 07:28:00.704742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.623 [2024-11-20 07:28:00.704894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.623 [2024-11-20 07:28:00.704900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.623 [2024-11-20 07:28:00.704906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.623 [2024-11-20 07:28:00.704912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.623 [2024-11-20 07:28:00.716615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.623 [2024-11-20 07:28:00.717164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.623 [2024-11-20 07:28:00.717195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.623 [2024-11-20 07:28:00.717203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.623 [2024-11-20 07:28:00.717367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.623 [2024-11-20 07:28:00.717519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.623 [2024-11-20 07:28:00.717525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.623 [2024-11-20 07:28:00.717531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.623 [2024-11-20 07:28:00.717536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.623 [2024-11-20 07:28:00.729246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.623 [2024-11-20 07:28:00.729801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.623 [2024-11-20 07:28:00.729831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.623 [2024-11-20 07:28:00.729840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.623 [2024-11-20 07:28:00.730007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.623 [2024-11-20 07:28:00.730166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.623 [2024-11-20 07:28:00.730173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.623 [2024-11-20 07:28:00.730179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.623 [2024-11-20 07:28:00.730184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.623 [2024-11-20 07:28:00.741879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.623 [2024-11-20 07:28:00.742536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.623 [2024-11-20 07:28:00.742566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.623 [2024-11-20 07:28:00.742575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.623 [2024-11-20 07:28:00.742739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.623 [2024-11-20 07:28:00.742891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.623 [2024-11-20 07:28:00.742897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.623 [2024-11-20 07:28:00.742903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.623 [2024-11-20 07:28:00.742909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.623 [2024-11-20 07:28:00.754469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.623 [2024-11-20 07:28:00.755012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.623 [2024-11-20 07:28:00.755043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.623 [2024-11-20 07:28:00.755052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.623 [2024-11-20 07:28:00.755222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.623 [2024-11-20 07:28:00.755374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.623 [2024-11-20 07:28:00.755381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.623 [2024-11-20 07:28:00.755387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.623 [2024-11-20 07:28:00.755393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.623 [2024-11-20 07:28:00.767085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.623 [2024-11-20 07:28:00.767550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.623 [2024-11-20 07:28:00.767565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.623 [2024-11-20 07:28:00.767571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.623 [2024-11-20 07:28:00.767719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.623 [2024-11-20 07:28:00.767868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.623 [2024-11-20 07:28:00.767874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.623 [2024-11-20 07:28:00.767883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.623 [2024-11-20 07:28:00.767888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.623 [2024-11-20 07:28:00.779734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.623 [2024-11-20 07:28:00.780311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.623 [2024-11-20 07:28:00.780341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.623 [2024-11-20 07:28:00.780350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.623 [2024-11-20 07:28:00.780514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.623 [2024-11-20 07:28:00.780666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.623 [2024-11-20 07:28:00.780672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.623 [2024-11-20 07:28:00.780678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.623 [2024-11-20 07:28:00.780683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.623 [2024-11-20 07:28:00.792386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.623 [2024-11-20 07:28:00.792910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.623 [2024-11-20 07:28:00.792941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.623 [2024-11-20 07:28:00.792949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.623 [2024-11-20 07:28:00.793113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.623 [2024-11-20 07:28:00.793272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.623 [2024-11-20 07:28:00.793280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.623 [2024-11-20 07:28:00.793286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.623 [2024-11-20 07:28:00.793292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.623 [2024-11-20 07:28:00.804987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.623 [2024-11-20 07:28:00.805541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.623 [2024-11-20 07:28:00.805572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.623 [2024-11-20 07:28:00.805580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.623 [2024-11-20 07:28:00.805744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.624 [2024-11-20 07:28:00.805896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.624 [2024-11-20 07:28:00.805902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.624 [2024-11-20 07:28:00.805908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.624 [2024-11-20 07:28:00.805914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.624 [2024-11-20 07:28:00.817624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.624 [2024-11-20 07:28:00.818176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.624 [2024-11-20 07:28:00.818206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.624 [2024-11-20 07:28:00.818215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.624 [2024-11-20 07:28:00.818379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.624 [2024-11-20 07:28:00.818531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.624 [2024-11-20 07:28:00.818537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.624 [2024-11-20 07:28:00.818543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.624 [2024-11-20 07:28:00.818549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.624 [2024-11-20 07:28:00.830258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.624 [2024-11-20 07:28:00.830812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.624 [2024-11-20 07:28:00.830842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.624 [2024-11-20 07:28:00.830851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.624 [2024-11-20 07:28:00.831015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.624 [2024-11-20 07:28:00.831175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.624 [2024-11-20 07:28:00.831182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.624 [2024-11-20 07:28:00.831188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.624 [2024-11-20 07:28:00.831194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.624 [2024-11-20 07:28:00.842898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.624 [2024-11-20 07:28:00.843460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.624 [2024-11-20 07:28:00.843490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.624 [2024-11-20 07:28:00.843499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.624 [2024-11-20 07:28:00.843663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.624 [2024-11-20 07:28:00.843815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.624 [2024-11-20 07:28:00.843822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.624 [2024-11-20 07:28:00.843827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.624 [2024-11-20 07:28:00.843833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.624 [2024-11-20 07:28:00.855529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.624 [2024-11-20 07:28:00.856070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.624 [2024-11-20 07:28:00.856101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.624 [2024-11-20 07:28:00.856112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.624 [2024-11-20 07:28:00.856284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.624 [2024-11-20 07:28:00.856437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.624 [2024-11-20 07:28:00.856443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.624 [2024-11-20 07:28:00.856449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.624 [2024-11-20 07:28:00.856455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.624 [2024-11-20 07:28:00.868150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.624 [2024-11-20 07:28:00.868681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.624 [2024-11-20 07:28:00.868711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.624 [2024-11-20 07:28:00.868720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.624 [2024-11-20 07:28:00.868885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.624 [2024-11-20 07:28:00.869036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.624 [2024-11-20 07:28:00.869043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.624 [2024-11-20 07:28:00.869048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.624 [2024-11-20 07:28:00.869054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.624 [2024-11-20 07:28:00.880764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.624 [2024-11-20 07:28:00.881258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.624 [2024-11-20 07:28:00.881289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.624 [2024-11-20 07:28:00.881298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.624 [2024-11-20 07:28:00.881464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.624 [2024-11-20 07:28:00.881616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.624 [2024-11-20 07:28:00.881623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.624 [2024-11-20 07:28:00.881628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.624 [2024-11-20 07:28:00.881634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.624 [2024-11-20 07:28:00.893350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.624 [2024-11-20 07:28:00.893935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.624 [2024-11-20 07:28:00.893965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.624 [2024-11-20 07:28:00.893974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.624 [2024-11-20 07:28:00.894138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.624 [2024-11-20 07:28:00.894300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.624 [2024-11-20 07:28:00.894308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.624 [2024-11-20 07:28:00.894313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.624 [2024-11-20 07:28:00.894319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.888 [2024-11-20 07:28:00.906020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.888 [2024-11-20 07:28:00.906488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.888 [2024-11-20 07:28:00.906503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.888 [2024-11-20 07:28:00.906509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.888 [2024-11-20 07:28:00.906658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.888 [2024-11-20 07:28:00.906807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.888 [2024-11-20 07:28:00.906813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.888 [2024-11-20 07:28:00.906819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.888 [2024-11-20 07:28:00.906824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.888 [2024-11-20 07:28:00.918666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.888 [2024-11-20 07:28:00.919200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.888 [2024-11-20 07:28:00.919230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.888 [2024-11-20 07:28:00.919239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.888 [2024-11-20 07:28:00.919406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.888 [2024-11-20 07:28:00.919558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.888 [2024-11-20 07:28:00.919564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.888 [2024-11-20 07:28:00.919570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.888 [2024-11-20 07:28:00.919576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.888 [2024-11-20 07:28:00.931309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.888 [2024-11-20 07:28:00.931874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.888 [2024-11-20 07:28:00.931904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.888 [2024-11-20 07:28:00.931913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.888 [2024-11-20 07:28:00.932077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.888 [2024-11-20 07:28:00.932236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.888 [2024-11-20 07:28:00.932243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.888 [2024-11-20 07:28:00.932252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.888 [2024-11-20 07:28:00.932258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.888 [2024-11-20 07:28:00.943955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.888 [2024-11-20 07:28:00.944472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.888 [2024-11-20 07:28:00.944503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.888 [2024-11-20 07:28:00.944512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.888 [2024-11-20 07:28:00.944676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.888 [2024-11-20 07:28:00.944828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.888 [2024-11-20 07:28:00.944834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.888 [2024-11-20 07:28:00.944840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.888 [2024-11-20 07:28:00.944845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.888 [2024-11-20 07:28:00.956560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.888 [2024-11-20 07:28:00.957114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.888 [2024-11-20 07:28:00.957144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.888 [2024-11-20 07:28:00.957153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.888 [2024-11-20 07:28:00.957325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.888 [2024-11-20 07:28:00.957478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.888 [2024-11-20 07:28:00.957485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.888 [2024-11-20 07:28:00.957490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.888 [2024-11-20 07:28:00.957496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.888 [2024-11-20 07:28:00.969197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.888 [2024-11-20 07:28:00.969752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.888 [2024-11-20 07:28:00.969782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.888 [2024-11-20 07:28:00.969791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.888 [2024-11-20 07:28:00.969955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.888 [2024-11-20 07:28:00.970107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.888 [2024-11-20 07:28:00.970113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.889 [2024-11-20 07:28:00.970119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.889 [2024-11-20 07:28:00.970124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.889 7105.75 IOPS, 27.76 MiB/s [2024-11-20T06:28:01.167Z] [2024-11-20 07:28:00.981854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.889 [2024-11-20 07:28:00.982421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.889 [2024-11-20 07:28:00.982452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.889 [2024-11-20 07:28:00.982460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.889 [2024-11-20 07:28:00.982625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.889 [2024-11-20 07:28:00.982776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.889 [2024-11-20 07:28:00.982782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.889 [2024-11-20 07:28:00.982788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.889 [2024-11-20 07:28:00.982794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.889 [2024-11-20 07:28:00.994497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.889 [2024-11-20 07:28:00.994914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.889 [2024-11-20 07:28:00.994929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.889 [2024-11-20 07:28:00.994934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.889 [2024-11-20 07:28:00.995082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.889 [2024-11-20 07:28:00.995235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.889 [2024-11-20 07:28:00.995241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.889 [2024-11-20 07:28:00.995247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.889 [2024-11-20 07:28:00.995253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
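The "7105.75 IOPS, 27.76 MiB/s" token at the head of this block is the benchmark's periodic throughput sample, written to the same stream as the error records. The two figures agree if each I/O is 4 KiB; the I/O size is not stated in this excerpt, so that is an assumption, but the arithmetic is exact: 7105.75 x 4096 B / 2^20 = 27.76 MiB/s. A one-line check in C:

/* Sanity-check the throughput sample above; the 4 KiB I/O size is an
 * assumption, not stated in the log, but it makes the figures agree. */
#include <stdio.h>

int main(void)
{
    double iops = 7105.75;
    double io_size = 4096.0;                 /* assumed 4 KiB per I/O */
    double mib_s = iops * io_size / (1024.0 * 1024.0);
    printf("%.2f MiB/s\n", mib_s);           /* prints 27.76 */
    return 0;
}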
00:29:38.889 [2024-11-20 07:28:01.007081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.889 [2024-11-20 07:28:01.007568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.889 [2024-11-20 07:28:01.007581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.889 [2024-11-20 07:28:01.007587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.889 [2024-11-20 07:28:01.007735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.889 [2024-11-20 07:28:01.007883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.889 [2024-11-20 07:28:01.007889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.889 [2024-11-20 07:28:01.007894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.889 [2024-11-20 07:28:01.007899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.889 [2024-11-20 07:28:01.019724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.889 [2024-11-20 07:28:01.020170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.889 [2024-11-20 07:28:01.020183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.889 [2024-11-20 07:28:01.020192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.889 [2024-11-20 07:28:01.020341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.889 [2024-11-20 07:28:01.020489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.889 [2024-11-20 07:28:01.020495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.889 [2024-11-20 07:28:01.020500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.889 [2024-11-20 07:28:01.020505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.889 [2024-11-20 07:28:01.032348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.889 [2024-11-20 07:28:01.032889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.889 [2024-11-20 07:28:01.032919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.889 [2024-11-20 07:28:01.032927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.889 [2024-11-20 07:28:01.033092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.889 [2024-11-20 07:28:01.033249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.889 [2024-11-20 07:28:01.033256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.889 [2024-11-20 07:28:01.033262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.889 [2024-11-20 07:28:01.033268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.889 [2024-11-20 07:28:01.044965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.889 [2024-11-20 07:28:01.045398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.889 [2024-11-20 07:28:01.045414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.889 [2024-11-20 07:28:01.045419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.889 [2024-11-20 07:28:01.045568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.889 [2024-11-20 07:28:01.045717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.889 [2024-11-20 07:28:01.045723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.889 [2024-11-20 07:28:01.045728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.889 [2024-11-20 07:28:01.045733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.889 [2024-11-20 07:28:01.057568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.889 [2024-11-20 07:28:01.058035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.889 [2024-11-20 07:28:01.058047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.889 [2024-11-20 07:28:01.058052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.889 [2024-11-20 07:28:01.058204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.889 [2024-11-20 07:28:01.058357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.889 [2024-11-20 07:28:01.058363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.889 [2024-11-20 07:28:01.058369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.889 [2024-11-20 07:28:01.058373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.889 [2024-11-20 07:28:01.070214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.889 [2024-11-20 07:28:01.070694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.889 [2024-11-20 07:28:01.070707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.889 [2024-11-20 07:28:01.070712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.889 [2024-11-20 07:28:01.070860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.889 [2024-11-20 07:28:01.071009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.889 [2024-11-20 07:28:01.071015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.889 [2024-11-20 07:28:01.071020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.889 [2024-11-20 07:28:01.071024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.889 [2024-11-20 07:28:01.082873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.889 [2024-11-20 07:28:01.083479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.889 [2024-11-20 07:28:01.083509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.889 [2024-11-20 07:28:01.083518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.889 [2024-11-20 07:28:01.083685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.889 [2024-11-20 07:28:01.083837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.889 [2024-11-20 07:28:01.083843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.890 [2024-11-20 07:28:01.083849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.890 [2024-11-20 07:28:01.083855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.890 [2024-11-20 07:28:01.095566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.890 [2024-11-20 07:28:01.096028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.890 [2024-11-20 07:28:01.096043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.890 [2024-11-20 07:28:01.096049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.890 [2024-11-20 07:28:01.096204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.890 [2024-11-20 07:28:01.096354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.890 [2024-11-20 07:28:01.096361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.890 [2024-11-20 07:28:01.096373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.890 [2024-11-20 07:28:01.096378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.890 [2024-11-20 07:28:01.108212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.890 [2024-11-20 07:28:01.108791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.890 [2024-11-20 07:28:01.108821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.890 [2024-11-20 07:28:01.108829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.890 [2024-11-20 07:28:01.108994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.890 [2024-11-20 07:28:01.109146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.890 [2024-11-20 07:28:01.109152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.890 [2024-11-20 07:28:01.109165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.890 [2024-11-20 07:28:01.109172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.890 [2024-11-20 07:28:01.120873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.890 [2024-11-20 07:28:01.121507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.890 [2024-11-20 07:28:01.121538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.890 [2024-11-20 07:28:01.121546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.890 [2024-11-20 07:28:01.121714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.890 [2024-11-20 07:28:01.121865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.890 [2024-11-20 07:28:01.121872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.890 [2024-11-20 07:28:01.121878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.890 [2024-11-20 07:28:01.121884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.890 [2024-11-20 07:28:01.133501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.890 [2024-11-20 07:28:01.134021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.890 [2024-11-20 07:28:01.134051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.890 [2024-11-20 07:28:01.134060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.890 [2024-11-20 07:28:01.134230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.890 [2024-11-20 07:28:01.134383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.890 [2024-11-20 07:28:01.134389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.890 [2024-11-20 07:28:01.134395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.890 [2024-11-20 07:28:01.134400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.890 [2024-11-20 07:28:01.146114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.890 [2024-11-20 07:28:01.146674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.890 [2024-11-20 07:28:01.146705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.890 [2024-11-20 07:28:01.146714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.890 [2024-11-20 07:28:01.146881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.890 [2024-11-20 07:28:01.147032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.890 [2024-11-20 07:28:01.147039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.890 [2024-11-20 07:28:01.147045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.890 [2024-11-20 07:28:01.147050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.890 [2024-11-20 07:28:01.158774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.890 [2024-11-20 07:28:01.159288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.890 [2024-11-20 07:28:01.159303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:38.890 [2024-11-20 07:28:01.159308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:38.890 [2024-11-20 07:28:01.159457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:38.890 [2024-11-20 07:28:01.159606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.890 [2024-11-20 07:28:01.159612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.890 [2024-11-20 07:28:01.159617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.890 [2024-11-20 07:28:01.159622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.153 [2024-11-20 07:28:01.171466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.153 [2024-11-20 07:28:01.171871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.153 [2024-11-20 07:28:01.171884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.153 [2024-11-20 07:28:01.171889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.153 [2024-11-20 07:28:01.172037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.153 [2024-11-20 07:28:01.172190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.153 [2024-11-20 07:28:01.172196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.153 [2024-11-20 07:28:01.172201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.153 [2024-11-20 07:28:01.172206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.153 [2024-11-20 07:28:01.184050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.153 [2024-11-20 07:28:01.184555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.153 [2024-11-20 07:28:01.184568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.153 [2024-11-20 07:28:01.184577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.153 [2024-11-20 07:28:01.184725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.153 [2024-11-20 07:28:01.184873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.153 [2024-11-20 07:28:01.184878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.153 [2024-11-20 07:28:01.184883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.153 [2024-11-20 07:28:01.184888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.153 [2024-11-20 07:28:01.196724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.153 [2024-11-20 07:28:01.197171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.153 [2024-11-20 07:28:01.197184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.153 [2024-11-20 07:28:01.197189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.153 [2024-11-20 07:28:01.197337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.153 [2024-11-20 07:28:01.197486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.153 [2024-11-20 07:28:01.197492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.153 [2024-11-20 07:28:01.197497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.153 [2024-11-20 07:28:01.197502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.153 [2024-11-20 07:28:01.209337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.153 [2024-11-20 07:28:01.209898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.153 [2024-11-20 07:28:01.209929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.153 [2024-11-20 07:28:01.209938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.153 [2024-11-20 07:28:01.210102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.153 [2024-11-20 07:28:01.210260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.153 [2024-11-20 07:28:01.210267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.153 [2024-11-20 07:28:01.210273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.153 [2024-11-20 07:28:01.210278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.153 [2024-11-20 07:28:01.221981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.153 [2024-11-20 07:28:01.222527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.154 [2024-11-20 07:28:01.222542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.154 [2024-11-20 07:28:01.222548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.154 [2024-11-20 07:28:01.222697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.154 [2024-11-20 07:28:01.222850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.154 [2024-11-20 07:28:01.222856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.154 [2024-11-20 07:28:01.222861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.154 [2024-11-20 07:28:01.222866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.154 [2024-11-20 07:28:01.234562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.154 [2024-11-20 07:28:01.235010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.154 [2024-11-20 07:28:01.235023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.154 [2024-11-20 07:28:01.235028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.154 [2024-11-20 07:28:01.235182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.154 [2024-11-20 07:28:01.235331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.154 [2024-11-20 07:28:01.235337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.154 [2024-11-20 07:28:01.235342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.154 [2024-11-20 07:28:01.235347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.154 [2024-11-20 07:28:01.247181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.154 [2024-11-20 07:28:01.247690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.154 [2024-11-20 07:28:01.247720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.154 [2024-11-20 07:28:01.247729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.154 [2024-11-20 07:28:01.247894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.154 [2024-11-20 07:28:01.248046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.154 [2024-11-20 07:28:01.248052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.154 [2024-11-20 07:28:01.248058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.154 [2024-11-20 07:28:01.248064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.154 [2024-11-20 07:28:01.259784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.154 [2024-11-20 07:28:01.260298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.154 [2024-11-20 07:28:01.260329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.154 [2024-11-20 07:28:01.260338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.154 [2024-11-20 07:28:01.260505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.154 [2024-11-20 07:28:01.260657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.154 [2024-11-20 07:28:01.260663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.154 [2024-11-20 07:28:01.260673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.154 [2024-11-20 07:28:01.260678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.154 [2024-11-20 07:28:01.272395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.154 [2024-11-20 07:28:01.272979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.154 [2024-11-20 07:28:01.273010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.154 [2024-11-20 07:28:01.273019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.154 [2024-11-20 07:28:01.273190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.154 [2024-11-20 07:28:01.273349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.154 [2024-11-20 07:28:01.273356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.154 [2024-11-20 07:28:01.273361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.154 [2024-11-20 07:28:01.273367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.154 [2024-11-20 07:28:01.285078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.154 [2024-11-20 07:28:01.285336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.154 [2024-11-20 07:28:01.285352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.154 [2024-11-20 07:28:01.285358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.154 [2024-11-20 07:28:01.285507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.154 [2024-11-20 07:28:01.285656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.154 [2024-11-20 07:28:01.285662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.154 [2024-11-20 07:28:01.285667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.154 [2024-11-20 07:28:01.285672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.154 [2024-11-20 07:28:01.297658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.154 [2024-11-20 07:28:01.298118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.154 [2024-11-20 07:28:01.298148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.154 [2024-11-20 07:28:01.298157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.154 [2024-11-20 07:28:01.298330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.154 [2024-11-20 07:28:01.298482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.154 [2024-11-20 07:28:01.298489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.154 [2024-11-20 07:28:01.298494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.154 [2024-11-20 07:28:01.298500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.154 [2024-11-20 07:28:01.310271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.154 [2024-11-20 07:28:01.310796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.154 [2024-11-20 07:28:01.310810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.154 [2024-11-20 07:28:01.310816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.154 [2024-11-20 07:28:01.310965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.154 [2024-11-20 07:28:01.311113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.154 [2024-11-20 07:28:01.311119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.154 [2024-11-20 07:28:01.311125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.154 [2024-11-20 07:28:01.311130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.154 [2024-11-20 07:28:01.322983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.154 [2024-11-20 07:28:01.323501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.154 [2024-11-20 07:28:01.323532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.154 [2024-11-20 07:28:01.323541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.154 [2024-11-20 07:28:01.323706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.154 [2024-11-20 07:28:01.323858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.154 [2024-11-20 07:28:01.323864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.154 [2024-11-20 07:28:01.323870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.154 [2024-11-20 07:28:01.323876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.154 [2024-11-20 07:28:01.335584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.154 [2024-11-20 07:28:01.336112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.154 [2024-11-20 07:28:01.336142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.154 [2024-11-20 07:28:01.336151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.154 [2024-11-20 07:28:01.336325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.154 [2024-11-20 07:28:01.336477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.154 [2024-11-20 07:28:01.336484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.154 [2024-11-20 07:28:01.336491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.154 [2024-11-20 07:28:01.336497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.155 [2024-11-20 07:28:01.348204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.155 [2024-11-20 07:28:01.348665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.155 [2024-11-20 07:28:01.348695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.155 [2024-11-20 07:28:01.348707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.155 [2024-11-20 07:28:01.348872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.155 [2024-11-20 07:28:01.349023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.155 [2024-11-20 07:28:01.349030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.155 [2024-11-20 07:28:01.349035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.155 [2024-11-20 07:28:01.349041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.155 [2024-11-20 07:28:01.360896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.155 [2024-11-20 07:28:01.361469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.155 [2024-11-20 07:28:01.361500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.155 [2024-11-20 07:28:01.361508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.155 [2024-11-20 07:28:01.361676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.155 [2024-11-20 07:28:01.361827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.155 [2024-11-20 07:28:01.361834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.155 [2024-11-20 07:28:01.361840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.155 [2024-11-20 07:28:01.361845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.155 [2024-11-20 07:28:01.373560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.155 [2024-11-20 07:28:01.374028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.155 [2024-11-20 07:28:01.374043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.155 [2024-11-20 07:28:01.374049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.155 [2024-11-20 07:28:01.374202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.155 [2024-11-20 07:28:01.374359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.155 [2024-11-20 07:28:01.374365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.155 [2024-11-20 07:28:01.374371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.155 [2024-11-20 07:28:01.374375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.155 [2024-11-20 07:28:01.386208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.155 [2024-11-20 07:28:01.386688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.155 [2024-11-20 07:28:01.386700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.155 [2024-11-20 07:28:01.386706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.155 [2024-11-20 07:28:01.386854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.155 [2024-11-20 07:28:01.387006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.155 [2024-11-20 07:28:01.387012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.155 [2024-11-20 07:28:01.387017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.155 [2024-11-20 07:28:01.387022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.155 [2024-11-20 07:28:01.398857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.155 [2024-11-20 07:28:01.399435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.155 [2024-11-20 07:28:01.399465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.155 [2024-11-20 07:28:01.399474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.155 [2024-11-20 07:28:01.399638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.155 [2024-11-20 07:28:01.399790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.155 [2024-11-20 07:28:01.399796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.155 [2024-11-20 07:28:01.399802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.155 [2024-11-20 07:28:01.399808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.155 [2024-11-20 07:28:01.411513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.155 [2024-11-20 07:28:01.412038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.155 [2024-11-20 07:28:01.412068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.155 [2024-11-20 07:28:01.412077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.155 [2024-11-20 07:28:01.412247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.155 [2024-11-20 07:28:01.412400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.155 [2024-11-20 07:28:01.412406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.155 [2024-11-20 07:28:01.412412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.155 [2024-11-20 07:28:01.412418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.155 [2024-11-20 07:28:01.424127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.155 [2024-11-20 07:28:01.424588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.155 [2024-11-20 07:28:01.424602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.155 [2024-11-20 07:28:01.424608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.155 [2024-11-20 07:28:01.424757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.155 [2024-11-20 07:28:01.424905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.155 [2024-11-20 07:28:01.424911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.155 [2024-11-20 07:28:01.424916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.155 [2024-11-20 07:28:01.424925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.417 [2024-11-20 07:28:01.436766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.417 [2024-11-20 07:28:01.437173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-11-20 07:28:01.437187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.417 [2024-11-20 07:28:01.437192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.417 [2024-11-20 07:28:01.437340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.417 [2024-11-20 07:28:01.437489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.417 [2024-11-20 07:28:01.437494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.417 [2024-11-20 07:28:01.437499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.417 [2024-11-20 07:28:01.437504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.417 [2024-11-20 07:28:01.449341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.417 [2024-11-20 07:28:01.449787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-11-20 07:28:01.449800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.417 [2024-11-20 07:28:01.449805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.417 [2024-11-20 07:28:01.449953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.417 [2024-11-20 07:28:01.450102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.417 [2024-11-20 07:28:01.450109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.417 [2024-11-20 07:28:01.450114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.417 [2024-11-20 07:28:01.450119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.417 [2024-11-20 07:28:01.461955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.417 [2024-11-20 07:28:01.462535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-11-20 07:28:01.462566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.417 [2024-11-20 07:28:01.462575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.417 [2024-11-20 07:28:01.462739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.417 [2024-11-20 07:28:01.462891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.417 [2024-11-20 07:28:01.462897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.417 [2024-11-20 07:28:01.462903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.417 [2024-11-20 07:28:01.462909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.417 [2024-11-20 07:28:01.474619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.417 [2024-11-20 07:28:01.475106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-11-20 07:28:01.475120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.417 [2024-11-20 07:28:01.475126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.417 [2024-11-20 07:28:01.475278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.417 [2024-11-20 07:28:01.475428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.417 [2024-11-20 07:28:01.475434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.417 [2024-11-20 07:28:01.475439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.417 [2024-11-20 07:28:01.475444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.417 [2024-11-20 07:28:01.487277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.417 [2024-11-20 07:28:01.487792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-11-20 07:28:01.487822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.417 [2024-11-20 07:28:01.487831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.417 [2024-11-20 07:28:01.487995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.417 [2024-11-20 07:28:01.488147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.418 [2024-11-20 07:28:01.488156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.418 [2024-11-20 07:28:01.488167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.418 [2024-11-20 07:28:01.488173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.418 [2024-11-20 07:28:01.499870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.418 [2024-11-20 07:28:01.500569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-11-20 07:28:01.500600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.418 [2024-11-20 07:28:01.500609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.418 [2024-11-20 07:28:01.500774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.418 [2024-11-20 07:28:01.500926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.418 [2024-11-20 07:28:01.500933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.418 [2024-11-20 07:28:01.500939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.418 [2024-11-20 07:28:01.500945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.418 [2024-11-20 07:28:01.512516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.418 [2024-11-20 07:28:01.513022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-11-20 07:28:01.513038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.418 [2024-11-20 07:28:01.513047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.418 [2024-11-20 07:28:01.513202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.418 [2024-11-20 07:28:01.513352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.418 [2024-11-20 07:28:01.513359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.418 [2024-11-20 07:28:01.513364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.418 [2024-11-20 07:28:01.513369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.418 [2024-11-20 07:28:01.525208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.418 [2024-11-20 07:28:01.525803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-11-20 07:28:01.525833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.418 [2024-11-20 07:28:01.525842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.418 [2024-11-20 07:28:01.526009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.418 [2024-11-20 07:28:01.526167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.418 [2024-11-20 07:28:01.526174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.418 [2024-11-20 07:28:01.526180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.418 [2024-11-20 07:28:01.526186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.418 [2024-11-20 07:28:01.537845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.418 [2024-11-20 07:28:01.538457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-11-20 07:28:01.538487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.418 [2024-11-20 07:28:01.538496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.418 [2024-11-20 07:28:01.538660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.418 [2024-11-20 07:28:01.538812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.418 [2024-11-20 07:28:01.538819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.418 [2024-11-20 07:28:01.538824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.418 [2024-11-20 07:28:01.538830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.418 [2024-11-20 07:28:01.550535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.418 [2024-11-20 07:28:01.551010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-11-20 07:28:01.551025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.418 [2024-11-20 07:28:01.551030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.418 [2024-11-20 07:28:01.551183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.418 [2024-11-20 07:28:01.551333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.418 [2024-11-20 07:28:01.551342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.418 [2024-11-20 07:28:01.551348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.418 [2024-11-20 07:28:01.551352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.418 [2024-11-20 07:28:01.563183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.418 [2024-11-20 07:28:01.563736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-11-20 07:28:01.563767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.418 [2024-11-20 07:28:01.563776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.418 [2024-11-20 07:28:01.563940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.418 [2024-11-20 07:28:01.564092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.418 [2024-11-20 07:28:01.564099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.418 [2024-11-20 07:28:01.564104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.418 [2024-11-20 07:28:01.564110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.418 [2024-11-20 07:28:01.575826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.418 [2024-11-20 07:28:01.576173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-11-20 07:28:01.576188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.418 [2024-11-20 07:28:01.576193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.418 [2024-11-20 07:28:01.576343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.418 [2024-11-20 07:28:01.576491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.418 [2024-11-20 07:28:01.576497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.418 [2024-11-20 07:28:01.576502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.418 [2024-11-20 07:28:01.576507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.418 [2024-11-20 07:28:01.588481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.418 [2024-11-20 07:28:01.588975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-11-20 07:28:01.589005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.418 [2024-11-20 07:28:01.589014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.418 [2024-11-20 07:28:01.589186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.418 [2024-11-20 07:28:01.589339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.418 [2024-11-20 07:28:01.589345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.418 [2024-11-20 07:28:01.589351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.418 [2024-11-20 07:28:01.589360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.419 [2024-11-20 07:28:01.601065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.419 [2024-11-20 07:28:01.601669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-11-20 07:28:01.601699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.419 [2024-11-20 07:28:01.601708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.419 [2024-11-20 07:28:01.601872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.419 [2024-11-20 07:28:01.602024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.419 [2024-11-20 07:28:01.602030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.419 [2024-11-20 07:28:01.602035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.419 [2024-11-20 07:28:01.602041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.419 [2024-11-20 07:28:01.613756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.419 [2024-11-20 07:28:01.614258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-11-20 07:28:01.614288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.419 [2024-11-20 07:28:01.614297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.419 [2024-11-20 07:28:01.614464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.419 [2024-11-20 07:28:01.614616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.419 [2024-11-20 07:28:01.614622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.419 [2024-11-20 07:28:01.614628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.419 [2024-11-20 07:28:01.614633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.419 [2024-11-20 07:28:01.626355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.419 [2024-11-20 07:28:01.626905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-11-20 07:28:01.626936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.419 [2024-11-20 07:28:01.626944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.419 [2024-11-20 07:28:01.627109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.419 [2024-11-20 07:28:01.627266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.419 [2024-11-20 07:28:01.627273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.419 [2024-11-20 07:28:01.627279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.419 [2024-11-20 07:28:01.627285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.419 [2024-11-20 07:28:01.638993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.419 [2024-11-20 07:28:01.639460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-11-20 07:28:01.639474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.419 [2024-11-20 07:28:01.639479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.419 [2024-11-20 07:28:01.639628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.419 [2024-11-20 07:28:01.639776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.419 [2024-11-20 07:28:01.639783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.419 [2024-11-20 07:28:01.639788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.419 [2024-11-20 07:28:01.639792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.419 [2024-11-20 07:28:01.651662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.419 [2024-11-20 07:28:01.652124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-11-20 07:28:01.652138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.419 [2024-11-20 07:28:01.652143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.419 [2024-11-20 07:28:01.652297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.419 [2024-11-20 07:28:01.652446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.419 [2024-11-20 07:28:01.652452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.419 [2024-11-20 07:28:01.652457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.419 [2024-11-20 07:28:01.652461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.419 [2024-11-20 07:28:01.664311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.419 [2024-11-20 07:28:01.664757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-11-20 07:28:01.664770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.419 [2024-11-20 07:28:01.664775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.419 [2024-11-20 07:28:01.664923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.419 [2024-11-20 07:28:01.665071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.419 [2024-11-20 07:28:01.665076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.419 [2024-11-20 07:28:01.665081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.419 [2024-11-20 07:28:01.665086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.419 [2024-11-20 07:28:01.676948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.419 [2024-11-20 07:28:01.677412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-11-20 07:28:01.677425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.419 [2024-11-20 07:28:01.677430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.419 [2024-11-20 07:28:01.677582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.419 [2024-11-20 07:28:01.677730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.419 [2024-11-20 07:28:01.677736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.419 [2024-11-20 07:28:01.677741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.419 [2024-11-20 07:28:01.677746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.419 [2024-11-20 07:28:01.689694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.681 [2024-11-20 07:28:01.690611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.681 [2024-11-20 07:28:01.690630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.681 [2024-11-20 07:28:01.690637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.681 [2024-11-20 07:28:01.690792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.681 [2024-11-20 07:28:01.690941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.681 [2024-11-20 07:28:01.690948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.681 [2024-11-20 07:28:01.690953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.681 [2024-11-20 07:28:01.690958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.681 [2024-11-20 07:28:01.702268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.681 [2024-11-20 07:28:01.702842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.681 [2024-11-20 07:28:01.702873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.681 [2024-11-20 07:28:01.702881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.681 [2024-11-20 07:28:01.703046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.681 [2024-11-20 07:28:01.703205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.681 [2024-11-20 07:28:01.703212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.681 [2024-11-20 07:28:01.703218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.681 [2024-11-20 07:28:01.703223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.681 [2024-11-20 07:28:01.714936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.681 [2024-11-20 07:28:01.715403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.681 [2024-11-20 07:28:01.715418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.681 [2024-11-20 07:28:01.715424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.681 [2024-11-20 07:28:01.715573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.681 [2024-11-20 07:28:01.715721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.681 [2024-11-20 07:28:01.715731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.681 [2024-11-20 07:28:01.715736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.681 [2024-11-20 07:28:01.715741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.681 [2024-11-20 07:28:01.727609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.681 [2024-11-20 07:28:01.728126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.681 [2024-11-20 07:28:01.728156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:39.681 [2024-11-20 07:28:01.728173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:39.681 [2024-11-20 07:28:01.728338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:39.681 [2024-11-20 07:28:01.728489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.681 [2024-11-20 07:28:01.728496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.681 [2024-11-20 07:28:01.728501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.681 [2024-11-20 07:28:01.728507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.681 [2024-11-20 07:28:01.740225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.681 [2024-11-20 07:28:01.740688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.681 [2024-11-20 07:28:01.740702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.681 [2024-11-20 07:28:01.740708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.681 [2024-11-20 07:28:01.740857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.681 [2024-11-20 07:28:01.741005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.681 [2024-11-20 07:28:01.741011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.681 [2024-11-20 07:28:01.741016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.681 [2024-11-20 07:28:01.741021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.681 [2024-11-20 07:28:01.752874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.681 [2024-11-20 07:28:01.753208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.681 [2024-11-20 07:28:01.753222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.681 [2024-11-20 07:28:01.753227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.681 [2024-11-20 07:28:01.753376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.681 [2024-11-20 07:28:01.753524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.681 [2024-11-20 07:28:01.753530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.682 [2024-11-20 07:28:01.753535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.682 [2024-11-20 07:28:01.753543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.682 [2024-11-20 07:28:01.765538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.682 [2024-11-20 07:28:01.765985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.682 [2024-11-20 07:28:01.765997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.682 [2024-11-20 07:28:01.766002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.682 [2024-11-20 07:28:01.766150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.682 [2024-11-20 07:28:01.766304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.682 [2024-11-20 07:28:01.766310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.682 [2024-11-20 07:28:01.766316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.682 [2024-11-20 07:28:01.766321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.682 [2024-11-20 07:28:01.778181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.682 [2024-11-20 07:28:01.778647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.682 [2024-11-20 07:28:01.778659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.682 [2024-11-20 07:28:01.778664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.682 [2024-11-20 07:28:01.778812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.682 [2024-11-20 07:28:01.778960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.682 [2024-11-20 07:28:01.778966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.682 [2024-11-20 07:28:01.778971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.682 [2024-11-20 07:28:01.778976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.682 [2024-11-20 07:28:01.790820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.682 [2024-11-20 07:28:01.791240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.682 [2024-11-20 07:28:01.791270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.682 [2024-11-20 07:28:01.791279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.682 [2024-11-20 07:28:01.791446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.682 [2024-11-20 07:28:01.791598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.682 [2024-11-20 07:28:01.791604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.682 [2024-11-20 07:28:01.791610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.682 [2024-11-20 07:28:01.791616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.682 [2024-11-20 07:28:01.803454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.682 [2024-11-20 07:28:01.803841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.682 [2024-11-20 07:28:01.803860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.682 [2024-11-20 07:28:01.803866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.682 [2024-11-20 07:28:01.804015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.682 [2024-11-20 07:28:01.804169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.682 [2024-11-20 07:28:01.804176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.682 [2024-11-20 07:28:01.804181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.682 [2024-11-20 07:28:01.804186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.682 [2024-11-20 07:28:01.816153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.682 [2024-11-20 07:28:01.816606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.682 [2024-11-20 07:28:01.816619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.682 [2024-11-20 07:28:01.816624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.682 [2024-11-20 07:28:01.816772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.682 [2024-11-20 07:28:01.816921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.682 [2024-11-20 07:28:01.816927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.682 [2024-11-20 07:28:01.816932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.682 [2024-11-20 07:28:01.816936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.682 [2024-11-20 07:28:01.828807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.682 [2024-11-20 07:28:01.829333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.682 [2024-11-20 07:28:01.829364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.682 [2024-11-20 07:28:01.829372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.682 [2024-11-20 07:28:01.829537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.682 [2024-11-20 07:28:01.829689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.682 [2024-11-20 07:28:01.829695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.682 [2024-11-20 07:28:01.829701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.682 [2024-11-20 07:28:01.829707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.682 [2024-11-20 07:28:01.841427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.682 [2024-11-20 07:28:01.841934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.682 [2024-11-20 07:28:01.841964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.682 [2024-11-20 07:28:01.841972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.682 [2024-11-20 07:28:01.842140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.682 [2024-11-20 07:28:01.842299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.682 [2024-11-20 07:28:01.842307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.682 [2024-11-20 07:28:01.842313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.682 [2024-11-20 07:28:01.842319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.682 [2024-11-20 07:28:01.854020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.682 [2024-11-20 07:28:01.854579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.682 [2024-11-20 07:28:01.854610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.682 [2024-11-20 07:28:01.854619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.682 [2024-11-20 07:28:01.854783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.683 [2024-11-20 07:28:01.854935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.683 [2024-11-20 07:28:01.854941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.683 [2024-11-20 07:28:01.854947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.683 [2024-11-20 07:28:01.854952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.683 [2024-11-20 07:28:01.866659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.683 [2024-11-20 07:28:01.867071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-20 07:28:01.867086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.683 [2024-11-20 07:28:01.867091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.683 [2024-11-20 07:28:01.867245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.683 [2024-11-20 07:28:01.867394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.683 [2024-11-20 07:28:01.867400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.683 [2024-11-20 07:28:01.867405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.683 [2024-11-20 07:28:01.867410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.683 [2024-11-20 07:28:01.879245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.683 [2024-11-20 07:28:01.879646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-20 07:28:01.879659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.683 [2024-11-20 07:28:01.879664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.683 [2024-11-20 07:28:01.879813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.683 [2024-11-20 07:28:01.879961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.683 [2024-11-20 07:28:01.879970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.683 [2024-11-20 07:28:01.879975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.683 [2024-11-20 07:28:01.879980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.683 [2024-11-20 07:28:01.891834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.683 [2024-11-20 07:28:01.892360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-20 07:28:01.892390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.683 [2024-11-20 07:28:01.892398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.683 [2024-11-20 07:28:01.892563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.683 [2024-11-20 07:28:01.892715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.683 [2024-11-20 07:28:01.892722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.683 [2024-11-20 07:28:01.892727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.683 [2024-11-20 07:28:01.892733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.683 [2024-11-20 07:28:01.904435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.683 [2024-11-20 07:28:01.904900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-20 07:28:01.904915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.683 [2024-11-20 07:28:01.904920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.683 [2024-11-20 07:28:01.905069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.683 [2024-11-20 07:28:01.905223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.683 [2024-11-20 07:28:01.905229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.683 [2024-11-20 07:28:01.905234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.683 [2024-11-20 07:28:01.905239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.683 [2024-11-20 07:28:01.917070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.683 [2024-11-20 07:28:01.917529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-20 07:28:01.917543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.683 [2024-11-20 07:28:01.917549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.683 [2024-11-20 07:28:01.917697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.683 [2024-11-20 07:28:01.917845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.683 [2024-11-20 07:28:01.917852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.683 [2024-11-20 07:28:01.917859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.683 [2024-11-20 07:28:01.917864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.683 [2024-11-20 07:28:01.929733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.683 [2024-11-20 07:28:01.930184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-20 07:28:01.930200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.683 [2024-11-20 07:28:01.930205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.683 [2024-11-20 07:28:01.930354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.683 [2024-11-20 07:28:01.930503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.683 [2024-11-20 07:28:01.930509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.683 [2024-11-20 07:28:01.930515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.683 [2024-11-20 07:28:01.930519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.683 [2024-11-20 07:28:01.942369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.683 [2024-11-20 07:28:01.942821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.683 [2024-11-20 07:28:01.942833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.683 [2024-11-20 07:28:01.942839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.683 [2024-11-20 07:28:01.942987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.683 [2024-11-20 07:28:01.943135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.683 [2024-11-20 07:28:01.943141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.683 [2024-11-20 07:28:01.943146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.683 [2024-11-20 07:28:01.943151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.945 [2024-11-20 07:28:01.954993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.945 [2024-11-20 07:28:01.955480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.945 [2024-11-20 07:28:01.955493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.945 [2024-11-20 07:28:01.955498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.945 [2024-11-20 07:28:01.955646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.945 [2024-11-20 07:28:01.955795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.945 [2024-11-20 07:28:01.955800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.945 [2024-11-20 07:28:01.955805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.945 [2024-11-20 07:28:01.955810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.945 [2024-11-20 07:28:01.967651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.945 [2024-11-20 07:28:01.968203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.945 [2024-11-20 07:28:01.968237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.945 [2024-11-20 07:28:01.968246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.945 [2024-11-20 07:28:01.968413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.945 [2024-11-20 07:28:01.968565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.945 [2024-11-20 07:28:01.968571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.945 [2024-11-20 07:28:01.968577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.945 [2024-11-20 07:28:01.968583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.945 5684.60 IOPS, 22.21 MiB/s [2024-11-20T06:28:02.223Z] [2024-11-20 07:28:01.980308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.945 [2024-11-20 07:28:01.980848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.945 [2024-11-20 07:28:01.980878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.945 [2024-11-20 07:28:01.980887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.945 [2024-11-20 07:28:01.981052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.945 [2024-11-20 07:28:01.981211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.945 [2024-11-20 07:28:01.981218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.945 [2024-11-20 07:28:01.981224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.945 [2024-11-20 07:28:01.981229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.946 [2024-11-20 07:28:01.992948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.946 [2024-11-20 07:28:01.993480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.946 [2024-11-20 07:28:01.993510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.946 [2024-11-20 07:28:01.993519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.946 [2024-11-20 07:28:01.993683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.946 [2024-11-20 07:28:01.993835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.946 [2024-11-20 07:28:01.993842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.946 [2024-11-20 07:28:01.993848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.946 [2024-11-20 07:28:01.993853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.946 [2024-11-20 07:28:02.005576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.946 [2024-11-20 07:28:02.006195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.946 [2024-11-20 07:28:02.006225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.946 [2024-11-20 07:28:02.006235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.946 [2024-11-20 07:28:02.006405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.946 [2024-11-20 07:28:02.006557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.946 [2024-11-20 07:28:02.006564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.946 [2024-11-20 07:28:02.006570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.946 [2024-11-20 07:28:02.006575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.946 [2024-11-20 07:28:02.018281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.946 [2024-11-20 07:28:02.018791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.946 [2024-11-20 07:28:02.018822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.946 [2024-11-20 07:28:02.018831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.946 [2024-11-20 07:28:02.018996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.946 [2024-11-20 07:28:02.019147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.946 [2024-11-20 07:28:02.019154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.946 [2024-11-20 07:28:02.019166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.946 [2024-11-20 07:28:02.019173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.946 [2024-11-20 07:28:02.030879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.946 [2024-11-20 07:28:02.031297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.946 [2024-11-20 07:28:02.031312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.946 [2024-11-20 07:28:02.031317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.946 [2024-11-20 07:28:02.031466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.946 [2024-11-20 07:28:02.031614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.946 [2024-11-20 07:28:02.031620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.946 [2024-11-20 07:28:02.031625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.946 [2024-11-20 07:28:02.031630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.946 [2024-11-20 07:28:02.043467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.946 [2024-11-20 07:28:02.043920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.946 [2024-11-20 07:28:02.043933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.946 [2024-11-20 07:28:02.043938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.946 [2024-11-20 07:28:02.044087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.946 [2024-11-20 07:28:02.044239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.946 [2024-11-20 07:28:02.044248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.946 [2024-11-20 07:28:02.044254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.946 [2024-11-20 07:28:02.044259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.946 [2024-11-20 07:28:02.056103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.946 [2024-11-20 07:28:02.056646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.946 [2024-11-20 07:28:02.056677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.946 [2024-11-20 07:28:02.056686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.946 [2024-11-20 07:28:02.056851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.946 [2024-11-20 07:28:02.057003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.946 [2024-11-20 07:28:02.057009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.946 [2024-11-20 07:28:02.057015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.946 [2024-11-20 07:28:02.057020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.946 [2024-11-20 07:28:02.068735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.946 [2024-11-20 07:28:02.069289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.946 [2024-11-20 07:28:02.069319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.946 [2024-11-20 07:28:02.069328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.946 [2024-11-20 07:28:02.069492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.946 [2024-11-20 07:28:02.069644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.946 [2024-11-20 07:28:02.069651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.946 [2024-11-20 07:28:02.069657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.946 [2024-11-20 07:28:02.069663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.946 [2024-11-20 07:28:02.081384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.946 [2024-11-20 07:28:02.081930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.946 [2024-11-20 07:28:02.081960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.946 [2024-11-20 07:28:02.081969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.946 [2024-11-20 07:28:02.082136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.946 [2024-11-20 07:28:02.082296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.946 [2024-11-20 07:28:02.082303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.946 [2024-11-20 07:28:02.082309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.946 [2024-11-20 07:28:02.082315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.946 [2024-11-20 07:28:02.094015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.946 [2024-11-20 07:28:02.094560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.946 [2024-11-20 07:28:02.094591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.946 [2024-11-20 07:28:02.094600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.946 [2024-11-20 07:28:02.094764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.946 [2024-11-20 07:28:02.094916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.946 [2024-11-20 07:28:02.094922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.946 [2024-11-20 07:28:02.094928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.946 [2024-11-20 07:28:02.094934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.946 [2024-11-20 07:28:02.106635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.946 [2024-11-20 07:28:02.107099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.946 [2024-11-20 07:28:02.107114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.946 [2024-11-20 07:28:02.107120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.946 [2024-11-20 07:28:02.107273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.947 [2024-11-20 07:28:02.107423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.947 [2024-11-20 07:28:02.107429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.947 [2024-11-20 07:28:02.107434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.947 [2024-11-20 07:28:02.107439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.947 [2024-11-20 07:28:02.119261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.947 [2024-11-20 07:28:02.119710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.947 [2024-11-20 07:28:02.119722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.947 [2024-11-20 07:28:02.119728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.947 [2024-11-20 07:28:02.119876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.947 [2024-11-20 07:28:02.120024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.947 [2024-11-20 07:28:02.120030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.947 [2024-11-20 07:28:02.120035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.947 [2024-11-20 07:28:02.120040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.947 [2024-11-20 07:28:02.131879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.947 [2024-11-20 07:28:02.132426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.947 [2024-11-20 07:28:02.132460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.947 [2024-11-20 07:28:02.132468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.947 [2024-11-20 07:28:02.132633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.947 [2024-11-20 07:28:02.132784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.947 [2024-11-20 07:28:02.132791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.947 [2024-11-20 07:28:02.132797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.947 [2024-11-20 07:28:02.132802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.947 [2024-11-20 07:28:02.144562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.947 [2024-11-20 07:28:02.145020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.947 [2024-11-20 07:28:02.145035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.947 [2024-11-20 07:28:02.145040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.947 [2024-11-20 07:28:02.145194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.947 [2024-11-20 07:28:02.145343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.947 [2024-11-20 07:28:02.145349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.947 [2024-11-20 07:28:02.145354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.947 [2024-11-20 07:28:02.145359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.947 [2024-11-20 07:28:02.157191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.947 [2024-11-20 07:28:02.157669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.947 [2024-11-20 07:28:02.157682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.947 [2024-11-20 07:28:02.157687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.947 [2024-11-20 07:28:02.157835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.947 [2024-11-20 07:28:02.157984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.947 [2024-11-20 07:28:02.157989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.947 [2024-11-20 07:28:02.157994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.947 [2024-11-20 07:28:02.157999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.947 [2024-11-20 07:28:02.169831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.947 [2024-11-20 07:28:02.170385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.947 [2024-11-20 07:28:02.170415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.947 [2024-11-20 07:28:02.170424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.947 [2024-11-20 07:28:02.170592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.947 [2024-11-20 07:28:02.170744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.947 [2024-11-20 07:28:02.170750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.947 [2024-11-20 07:28:02.170756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.947 [2024-11-20 07:28:02.170761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.947 [2024-11-20 07:28:02.182473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.947 [2024-11-20 07:28:02.183018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.947 [2024-11-20 07:28:02.183049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.947 [2024-11-20 07:28:02.183057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.947 [2024-11-20 07:28:02.183229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.947 [2024-11-20 07:28:02.183381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.947 [2024-11-20 07:28:02.183388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.947 [2024-11-20 07:28:02.183393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.947 [2024-11-20 07:28:02.183399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.947 [2024-11-20 07:28:02.195094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.947 [2024-11-20 07:28:02.195663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.947 [2024-11-20 07:28:02.195693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.947 [2024-11-20 07:28:02.195702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.947 [2024-11-20 07:28:02.195866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.947 [2024-11-20 07:28:02.196018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.947 [2024-11-20 07:28:02.196025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.947 [2024-11-20 07:28:02.196031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.947 [2024-11-20 07:28:02.196036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.947 [2024-11-20 07:28:02.207769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.947 [2024-11-20 07:28:02.208330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.947 [2024-11-20 07:28:02.208361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:39.947 [2024-11-20 07:28:02.208370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:39.947 [2024-11-20 07:28:02.208534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:39.947 [2024-11-20 07:28:02.208686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.947 [2024-11-20 07:28:02.208692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.947 [2024-11-20 07:28:02.208701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.947 [2024-11-20 07:28:02.208707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:40.210 [2024-11-20 07:28:02.220419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:40.210 [2024-11-20 07:28:02.220876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.210 [2024-11-20 07:28:02.220891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:40.210 [2024-11-20 07:28:02.220897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:40.210 [2024-11-20 07:28:02.221045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:40.210 [2024-11-20 07:28:02.221199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:40.210 [2024-11-20 07:28:02.221206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:40.210 [2024-11-20 07:28:02.221211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:40.210 [2024-11-20 07:28:02.221216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:40.210 [2024-11-20 07:28:02.233061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:40.210 [2024-11-20 07:28:02.233610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.210 [2024-11-20 07:28:02.233641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:40.210 [2024-11-20 07:28:02.233649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:40.210 [2024-11-20 07:28:02.233814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:40.210 [2024-11-20 07:28:02.233966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:40.210 [2024-11-20 07:28:02.233972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:40.210 [2024-11-20 07:28:02.233978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:40.210 [2024-11-20 07:28:02.233983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:40.210 [2024-11-20 07:28:02.245684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:40.210 [2024-11-20 07:28:02.246239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.210 [2024-11-20 07:28:02.246270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:40.210 [2024-11-20 07:28:02.246278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:40.210 [2024-11-20 07:28:02.246445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:40.210 [2024-11-20 07:28:02.246597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:40.210 [2024-11-20 07:28:02.246604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:40.210 [2024-11-20 07:28:02.246610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:40.210 [2024-11-20 07:28:02.246615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:40.210 [2024-11-20 07:28:02.258325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:40.210 [2024-11-20 07:28:02.258799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.210 [2024-11-20 07:28:02.258814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:40.210 [2024-11-20 07:28:02.258820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:40.210 [2024-11-20 07:28:02.258969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:40.210 [2024-11-20 07:28:02.259117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:40.210 [2024-11-20 07:28:02.259123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:40.210 [2024-11-20 07:28:02.259128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:40.210 [2024-11-20 07:28:02.259133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:40.210 [2024-11-20 07:28:02.270983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.210 [2024-11-20 07:28:02.271391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.210 [2024-11-20 07:28:02.271405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:40.210 [2024-11-20 07:28:02.271410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:40.210 [2024-11-20 07:28:02.271558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:40.210 [2024-11-20 07:28:02.271707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.210 [2024-11-20 07:28:02.271713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.210 [2024-11-20 07:28:02.271717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.210 [2024-11-20 07:28:02.271722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:40.210 [2024-11-20 07:28:02.283570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.210 [2024-11-20 07:28:02.283974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.210 [2024-11-20 07:28:02.283987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:40.210 [2024-11-20 07:28:02.283992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:40.210 [2024-11-20 07:28:02.284140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:40.210 [2024-11-20 07:28:02.284294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.210 [2024-11-20 07:28:02.284301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.210 [2024-11-20 07:28:02.284306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.210 [2024-11-20 07:28:02.284311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.210 [2024-11-20 07:28:02.296138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.211 [2024-11-20 07:28:02.296590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.211 [2024-11-20 07:28:02.296603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:40.211 [2024-11-20 07:28:02.296616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:40.211 [2024-11-20 07:28:02.296764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:40.211 [2024-11-20 07:28:02.296913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.211 [2024-11-20 07:28:02.296918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.211 [2024-11-20 07:28:02.296923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.211 [2024-11-20 07:28:02.296928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:40.211 [2024-11-20 07:28:02.308760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.211 [2024-11-20 07:28:02.309100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.211 [2024-11-20 07:28:02.309112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:40.211 [2024-11-20 07:28:02.309117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:40.211 [2024-11-20 07:28:02.309269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:40.211 [2024-11-20 07:28:02.309418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.211 [2024-11-20 07:28:02.309425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.211 [2024-11-20 07:28:02.309430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.211 [2024-11-20 07:28:02.309434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.211 [2024-11-20 07:28:02.321403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.211 [2024-11-20 07:28:02.321939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.211 [2024-11-20 07:28:02.321969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:40.211 [2024-11-20 07:28:02.321978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:40.211 [2024-11-20 07:28:02.322142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:40.211 [2024-11-20 07:28:02.322307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.211 [2024-11-20 07:28:02.322314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.211 [2024-11-20 07:28:02.322320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.211 [2024-11-20 07:28:02.322326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:40.211 [2024-11-20 07:28:02.334023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.211 [2024-11-20 07:28:02.334498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.211 [2024-11-20 07:28:02.334512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:40.211 [2024-11-20 07:28:02.334518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:40.211 [2024-11-20 07:28:02.334667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:40.211 [2024-11-20 07:28:02.334819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.211 [2024-11-20 07:28:02.334825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.211 [2024-11-20 07:28:02.334831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.211 [2024-11-20 07:28:02.334835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.211 [2024-11-20 07:28:02.346663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.211 [2024-11-20 07:28:02.347204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.211 [2024-11-20 07:28:02.347234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:40.211 [2024-11-20 07:28:02.347243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:40.211 [2024-11-20 07:28:02.347407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:40.211 [2024-11-20 07:28:02.347559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.211 [2024-11-20 07:28:02.347565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.211 [2024-11-20 07:28:02.347571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.211 [2024-11-20 07:28:02.347576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:40.211 [2024-11-20 07:28:02.359283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.211 [2024-11-20 07:28:02.359786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.211 [2024-11-20 07:28:02.359817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:40.211 [2024-11-20 07:28:02.359825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:40.211 [2024-11-20 07:28:02.359990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:40.211 [2024-11-20 07:28:02.360142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.211 [2024-11-20 07:28:02.360148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.211 [2024-11-20 07:28:02.360154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.211 [2024-11-20 07:28:02.360166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.211 [2024-11-20 07:28:02.371862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.211 [2024-11-20 07:28:02.372424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.211 [2024-11-20 07:28:02.372454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:40.211 [2024-11-20 07:28:02.372463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:40.211 [2024-11-20 07:28:02.372628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:40.211 [2024-11-20 07:28:02.372779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.211 [2024-11-20 07:28:02.372786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.211 [2024-11-20 07:28:02.372794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.211 [2024-11-20 07:28:02.372800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:40.211 [2024-11-20 07:28:02.384515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.211 [2024-11-20 07:28:02.384973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.211 [2024-11-20 07:28:02.384987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:40.211 [2024-11-20 07:28:02.384993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:40.211 [2024-11-20 07:28:02.385142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:40.211 [2024-11-20 07:28:02.385296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.211 [2024-11-20 07:28:02.385302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.211 [2024-11-20 07:28:02.385308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.211 [2024-11-20 07:28:02.385313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.211 [2024-11-20 07:28:02.397166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.211 [2024-11-20 07:28:02.397660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.211 [2024-11-20 07:28:02.397690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:40.211 [2024-11-20 07:28:02.397698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:40.211 [2024-11-20 07:28:02.397863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:40.211 [2024-11-20 07:28:02.398014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.211 [2024-11-20 07:28:02.398021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.211 [2024-11-20 07:28:02.398026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.211 [2024-11-20 07:28:02.398032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:40.211 [2024-11-20 07:28:02.409734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.212 [2024-11-20 07:28:02.410205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.212 [2024-11-20 07:28:02.410227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:40.212 [2024-11-20 07:28:02.410233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:40.212 [2024-11-20 07:28:02.410387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:40.212 [2024-11-20 07:28:02.410537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.212 [2024-11-20 07:28:02.410543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.212 [2024-11-20 07:28:02.410548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.212 [2024-11-20 07:28:02.410553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.212 [2024-11-20 07:28:02.422402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.212 [2024-11-20 07:28:02.422950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.212 [2024-11-20 07:28:02.422980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:40.212 [2024-11-20 07:28:02.422988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:40.212 [2024-11-20 07:28:02.423153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:40.212 [2024-11-20 07:28:02.423313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.212 [2024-11-20 07:28:02.423320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.212 [2024-11-20 07:28:02.423325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.212 [2024-11-20 07:28:02.423331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:40.212 [2024-11-20 07:28:02.435029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.212 [2024-11-20 07:28:02.435560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.212 [2024-11-20 07:28:02.435591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:40.212 [2024-11-20 07:28:02.435599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:40.212 [2024-11-20 07:28:02.435764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:40.212 [2024-11-20 07:28:02.435915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.212 [2024-11-20 07:28:02.435922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.212 [2024-11-20 07:28:02.435927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.212 [2024-11-20 07:28:02.435933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.212 [2024-11-20 07:28:02.447632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.212 [2024-11-20 07:28:02.448184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.212 [2024-11-20 07:28:02.448214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:40.212 [2024-11-20 07:28:02.448223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:40.212 [2024-11-20 07:28:02.448389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:40.212 [2024-11-20 07:28:02.448541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.212 [2024-11-20 07:28:02.448547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.212 [2024-11-20 07:28:02.448553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.212 [2024-11-20 07:28:02.448559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:40.212 [2024-11-20 07:28:02.460270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.212 [2024-11-20 07:28:02.460820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.212 [2024-11-20 07:28:02.460851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:40.212 [2024-11-20 07:28:02.460863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:40.212 [2024-11-20 07:28:02.461027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:40.212 [2024-11-20 07:28:02.461188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.212 [2024-11-20 07:28:02.461196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.212 [2024-11-20 07:28:02.461201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.212 [2024-11-20 07:28:02.461208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
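Each attempt above follows the same anatomy: bdev_nvme disconnects the controller, the TCP transport calls connect() toward 10.0.0.2:4420 and gets errno 111 (ECONNREFUSED, meaning nothing is listening there, because the target process has just been killed, as the next lines show), the half-built qpair is torn down (hence the "Bad file descriptor" flush error), and spdk_nvme_ctrlr_reconnect_poll_async marks the reinitialization failed so the next reset can be scheduled. As a minimal standalone sketch of the condition this loop is polling for, assuming the address and port from the log and a netcat binary on the host (illustrative only, not part of bdevperf.sh):

    # Probe the target port until something accepts; while the target is down,
    # every probe fails the same way the driver's connect() does (ECONNREFUSED).
    while ! nc -z -w 1 10.0.0.2 4420; do
        sleep 0.0125    # the driver in this log retries roughly every 12.5 ms
    done
    echo 'target is listening on 10.0.0.2:4420 again'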
00:29:40.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3701569 Killed "${NVMF_APP[@]}" "$@"
00:29:40.212 07:28:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:40.212 07:28:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:40.212 07:28:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:40.212 07:28:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:40.212 07:28:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... 07:28:02.472: another reconnect attempt fails with the identical sequence ...]
00:29:40.212 07:28:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3703259
00:29:40.212 07:28:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3703259
00:29:40.212 07:28:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:40.212 07:28:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 3703259 ']'
00:29:40.212 07:28:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:40.212 07:28:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:40.212 07:28:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
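The "Killed" line is the point of this phase of the test: bdevperf.sh deliberately kills the running nvmf target ("${NVMF_APP[@]}"), which is why every connect() above is refused, and then calls tgt_init to bring a fresh target up. nvmfappstart relaunches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with core mask 0xE and all tracepoint groups enabled (-e 0xFFFF), records the new pid (3703259), and blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A simplified sketch of what a waitforlisten-style helper does, assuming the rpc.py path from this workspace (the real helper in common/autotest_common.sh is more thorough):

    # Poll the app's RPC socket until it answers, or give up if the pid dies.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
        while kill -0 "$pid" 2>/dev/null; do
            # spdk_get_version succeeds once the app is up and serving RPCs
            if "$rpc" -s "$rpc_addr" spdk_get_version >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        return 1    # the process exited before it ever started listening
    }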
00:29:40.212 07:28:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:40.212 07:28:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... 07:28:02.485 and 07:28:02.498: two more reconnect attempts fail with the identical sequence ...]
[... 07:28:02.510 and 07:28:02.523: two more reconnect attempts fail with the identical sequence ...]
[2024-11-20 07:28:02.530806] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:29:40.482 [2024-11-20 07:28:02.530853] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... 07:28:02.536 and 07:28:02.548: two more reconnect attempts fail with the identical sequence ...]
[... 07:28:02.561 through 07:28:02.599: four more reconnect attempts fail with the identical sequence ...]
[... 07:28:02.611: another reconnect attempt fails with the identical sequence ...]
[2024-11-20 07:28:02.622315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
[... 07:28:02.624: another reconnect attempt fails with the identical sequence ...]
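The "Total cores available: 3" notice is the restarted target coming up, and it matches the -m 0xE mask passed to nvmf_tgt above: 0xE is binary 1110, that is cores 1, 2 and 3 with core 0 left out (the reactor lines a little further down confirm this). A quick illustrative way to expand such a mask in the shell:

    mask=0xE
    for core in {0..31}; do
        # test bit <core> of the mask; bash arithmetic understands the 0x prefix
        (( (mask >> core) & 1 )) && echo "core $core selected"
    done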
[... 07:28:02.637 and 07:28:02.649: two more reconnect attempts fail with the identical sequence ...]
[2024-11-20 07:28:02.651612] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-11-20 07:28:02.651635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-11-20 07:28:02.651642] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-11-20 07:28:02.651647] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-11-20 07:28:02.651652] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
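The app_setup_trace notices give two ways at the 0xFFFF tracepoint data: attach spdk_trace to the live instance (-s nvmf -i 0), or copy the shared-memory file for offline decoding. A short sketch of the offline path, using the file name from the notice and assuming spdk_trace's -f option for reading a saved trace file:

    # Snapshot the trace now, decode it later (or on another machine).
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -f /tmp/nvmf_trace.0 | head -n 20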
00:29:40.487 [2024-11-20 07:28:02.652669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:40.487 [2024-11-20 07:28:02.652819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:40.487 [2024-11-20 07:28:02.652821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
[... 07:28:02.662 and 07:28:02.675: two more reconnect attempts fail with the identical sequence ...]
00:29:40.488 [2024-11-20 07:28:02.687925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.488 [2024-11-20 07:28:02.688462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-11-20 07:28:02.688493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:40.488 [2024-11-20 07:28:02.688502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:40.488 [2024-11-20 07:28:02.688669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:40.488 [2024-11-20 07:28:02.688821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.488 [2024-11-20 07:28:02.688828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.488 [2024-11-20 07:28:02.688834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.488 [2024-11-20 07:28:02.688840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:40.488 [2024-11-20 07:28:02.700552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.488 [2024-11-20 07:28:02.701022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-11-20 07:28:02.701036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:40.488 [2024-11-20 07:28:02.701042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:40.488 [2024-11-20 07:28:02.701197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:40.488 [2024-11-20 07:28:02.701347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.488 [2024-11-20 07:28:02.701353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.488 [2024-11-20 07:28:02.701359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.489 [2024-11-20 07:28:02.701364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.489 [2024-11-20 07:28:02.713199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.489 [2024-11-20 07:28:02.713718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-11-20 07:28:02.713749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:40.489 [2024-11-20 07:28:02.713758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:40.489 [2024-11-20 07:28:02.713922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:40.489 [2024-11-20 07:28:02.714074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.489 [2024-11-20 07:28:02.714086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.489 [2024-11-20 07:28:02.714091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.489 [2024-11-20 07:28:02.714097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:40.489 [2024-11-20 07:28:02.725819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.489 [2024-11-20 07:28:02.726290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-11-20 07:28:02.726305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:40.489 [2024-11-20 07:28:02.726311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:40.489 [2024-11-20 07:28:02.726460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:40.489 [2024-11-20 07:28:02.726609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.489 [2024-11-20 07:28:02.726615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.489 [2024-11-20 07:28:02.726620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.489 [2024-11-20 07:28:02.726625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.760 [2024-11-20 07:28:02.966320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:40.760 [2024-11-20 07:28:02.966854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.760 [2024-11-20 07:28:02.966889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:40.760 [2024-11-20 07:28:02.966898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:40.760 [2024-11-20 07:28:02.967063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:40.760 [2024-11-20 07:28:02.967220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:40.760 [2024-11-20 07:28:02.967227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:40.760 [2024-11-20 07:28:02.967234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:40.760 [2024-11-20 07:28:02.967241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:40.760 4737.17 IOPS, 18.50 MiB/s [2024-11-20T06:28:03.038Z] [2024-11-20 07:28:02.980096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:40.760 [2024-11-20 07:28:02.980595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.760 [2024-11-20 07:28:02.980610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420
00:29:40.760 [2024-11-20 07:28:02.980615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set
00:29:40.760 [2024-11-20 07:28:02.980764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor
00:29:40.760 [2024-11-20 07:28:02.980912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:40.760 [2024-11-20 07:28:02.980918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:40.760 [2024-11-20 07:28:02.980923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:40.760 [2024-11-20 07:28:02.980928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
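The line interleaved above is bdevperf's periodic progress report, "4737.17 IOPS, 18.50 MiB/s"; its bracketed ISO timestamp appears to be UTC while the SPDK entries use local time, hence the one-hour offset. The two figures are internally consistent with a 4 KiB I/O size (4737.17 x 4096 B / 2^20 = 18.50 MiB/s); note the 4 KiB size is an inference from the arithmetic, not something this part of the log states. A quick check:

/* Sketch: verify the bdevperf progress line's IOPS/throughput pair under
 * an ASSUMED 4 KiB I/O size. */
#include <stdio.h>

int main(void)
{
    double iops = 4737.17;                         /* from the log line */
    double io_size = 4096.0;                       /* bytes; assumed, not logged */
    double mibps = iops * io_size / (1024 * 1024); /* bytes/s -> MiB/s */
    printf("%.2f MiB/s\n", mibps);                 /* prints 18.50 */
    return 0;
}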
00:29:40.760 [2024-11-20 07:28:02.992771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.760 [2024-11-20 07:28:02.993272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.760 [2024-11-20 07:28:02.993303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:40.760 [2024-11-20 07:28:02.993312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:40.760 [2024-11-20 07:28:02.993476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:40.760 [2024-11-20 07:28:02.993628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.760 [2024-11-20 07:28:02.993635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.760 [2024-11-20 07:28:02.993640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.760 [2024-11-20 07:28:02.993646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:40.760 [2024-11-20 07:28:03.005370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.760 [2024-11-20 07:28:03.005828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.760 [2024-11-20 07:28:03.005843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:40.760 [2024-11-20 07:28:03.005849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:40.760 [2024-11-20 07:28:03.006001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:40.760 [2024-11-20 07:28:03.006150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.760 [2024-11-20 07:28:03.006156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.760 [2024-11-20 07:28:03.006166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.760 [2024-11-20 07:28:03.006171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:40.760 [2024-11-20 07:28:03.018011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:40.760 [2024-11-20 07:28:03.018575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.760 [2024-11-20 07:28:03.018606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:40.760 [2024-11-20 07:28:03.018615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:40.760 [2024-11-20 07:28:03.018780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:40.760 [2024-11-20 07:28:03.018932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:40.760 [2024-11-20 07:28:03.018939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:40.760 [2024-11-20 07:28:03.018944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:40.760 [2024-11-20 07:28:03.018950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:41.023 [2024-11-20 07:28:03.030675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.023 [2024-11-20 07:28:03.031146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.023 [2024-11-20 07:28:03.031164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.023 [2024-11-20 07:28:03.031171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.023 [2024-11-20 07:28:03.031320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.023 [2024-11-20 07:28:03.031469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.023 [2024-11-20 07:28:03.031475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.023 [2024-11-20 07:28:03.031480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.023 [2024-11-20 07:28:03.031485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.023 [2024-11-20 07:28:03.043331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.023 [2024-11-20 07:28:03.043790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.023 [2024-11-20 07:28:03.043802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.023 [2024-11-20 07:28:03.043807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.023 [2024-11-20 07:28:03.043955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.023 [2024-11-20 07:28:03.044103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.023 [2024-11-20 07:28:03.044114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.023 [2024-11-20 07:28:03.044119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.023 [2024-11-20 07:28:03.044123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:41.023 [2024-11-20 07:28:03.055968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.023 [2024-11-20 07:28:03.056405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.023 [2024-11-20 07:28:03.056436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.023 [2024-11-20 07:28:03.056444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.023 [2024-11-20 07:28:03.056612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.024 [2024-11-20 07:28:03.056764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.024 [2024-11-20 07:28:03.056771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.024 [2024-11-20 07:28:03.056776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.024 [2024-11-20 07:28:03.056782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.024 [2024-11-20 07:28:03.068640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.024 [2024-11-20 07:28:03.069188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.024 [2024-11-20 07:28:03.069219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.024 [2024-11-20 07:28:03.069228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.024 [2024-11-20 07:28:03.069395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.024 [2024-11-20 07:28:03.069548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.024 [2024-11-20 07:28:03.069555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.024 [2024-11-20 07:28:03.069560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.024 [2024-11-20 07:28:03.069566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:41.024 [2024-11-20 07:28:03.081300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.024 [2024-11-20 07:28:03.081542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.024 [2024-11-20 07:28:03.081570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.024 [2024-11-20 07:28:03.081577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.024 [2024-11-20 07:28:03.081731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.024 [2024-11-20 07:28:03.081883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.024 [2024-11-20 07:28:03.081890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.024 [2024-11-20 07:28:03.081895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.024 [2024-11-20 07:28:03.081900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.024 [2024-11-20 07:28:03.093895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.024 [2024-11-20 07:28:03.094273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.024 [2024-11-20 07:28:03.094304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.024 [2024-11-20 07:28:03.094314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.024 [2024-11-20 07:28:03.094481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.024 [2024-11-20 07:28:03.094633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.024 [2024-11-20 07:28:03.094639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.024 [2024-11-20 07:28:03.094645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.024 [2024-11-20 07:28:03.094650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:41.024 [2024-11-20 07:28:03.106506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.024 [2024-11-20 07:28:03.106973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.024 [2024-11-20 07:28:03.106988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.024 [2024-11-20 07:28:03.106993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.024 [2024-11-20 07:28:03.107142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.024 [2024-11-20 07:28:03.107295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.024 [2024-11-20 07:28:03.107302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.024 [2024-11-20 07:28:03.107307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.024 [2024-11-20 07:28:03.107312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.024 [2024-11-20 07:28:03.119153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.024 [2024-11-20 07:28:03.119675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.024 [2024-11-20 07:28:03.119706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.024 [2024-11-20 07:28:03.119714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.024 [2024-11-20 07:28:03.119879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.024 [2024-11-20 07:28:03.120031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.024 [2024-11-20 07:28:03.120037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.024 [2024-11-20 07:28:03.120042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.024 [2024-11-20 07:28:03.120048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:41.024 [2024-11-20 07:28:03.131774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.024 [2024-11-20 07:28:03.132389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.024 [2024-11-20 07:28:03.132423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.024 [2024-11-20 07:28:03.132432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.024 [2024-11-20 07:28:03.132597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.024 [2024-11-20 07:28:03.132748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.024 [2024-11-20 07:28:03.132755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.024 [2024-11-20 07:28:03.132760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.024 [2024-11-20 07:28:03.132766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.024 [2024-11-20 07:28:03.144481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.024 [2024-11-20 07:28:03.144948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.024 [2024-11-20 07:28:03.144964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.024 [2024-11-20 07:28:03.144969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.024 [2024-11-20 07:28:03.145119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.024 [2024-11-20 07:28:03.145272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.024 [2024-11-20 07:28:03.145279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.024 [2024-11-20 07:28:03.145284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.024 [2024-11-20 07:28:03.145290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:41.024 [2024-11-20 07:28:03.157134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.024 [2024-11-20 07:28:03.157679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.024 [2024-11-20 07:28:03.157711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.024 [2024-11-20 07:28:03.157720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.025 [2024-11-20 07:28:03.157884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.025 [2024-11-20 07:28:03.158036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.025 [2024-11-20 07:28:03.158043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.025 [2024-11-20 07:28:03.158048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.025 [2024-11-20 07:28:03.158054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.025 [2024-11-20 07:28:03.169764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.025 [2024-11-20 07:28:03.170379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.025 [2024-11-20 07:28:03.170410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.025 [2024-11-20 07:28:03.170419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.025 [2024-11-20 07:28:03.170587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.025 [2024-11-20 07:28:03.170739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.025 [2024-11-20 07:28:03.170746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.025 [2024-11-20 07:28:03.170752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.025 [2024-11-20 07:28:03.170757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:41.025 [2024-11-20 07:28:03.182345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.025 [2024-11-20 07:28:03.182809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.025 [2024-11-20 07:28:03.182824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.025 [2024-11-20 07:28:03.182830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.025 [2024-11-20 07:28:03.182978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.025 [2024-11-20 07:28:03.183127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.025 [2024-11-20 07:28:03.183133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.025 [2024-11-20 07:28:03.183138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.025 [2024-11-20 07:28:03.183143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.025 [2024-11-20 07:28:03.194998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.025 [2024-11-20 07:28:03.195586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.025 [2024-11-20 07:28:03.195618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.025 [2024-11-20 07:28:03.195626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.025 [2024-11-20 07:28:03.195791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.025 [2024-11-20 07:28:03.195943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.025 [2024-11-20 07:28:03.195950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.025 [2024-11-20 07:28:03.195955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.025 [2024-11-20 07:28:03.195961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:41.025 [2024-11-20 07:28:03.207675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.025 [2024-11-20 07:28:03.208149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.025 [2024-11-20 07:28:03.208168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.025 [2024-11-20 07:28:03.208174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.025 [2024-11-20 07:28:03.208323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.025 [2024-11-20 07:28:03.208471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.025 [2024-11-20 07:28:03.208477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.025 [2024-11-20 07:28:03.208486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.025 [2024-11-20 07:28:03.208491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.025 [2024-11-20 07:28:03.220338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.025 [2024-11-20 07:28:03.220809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.025 [2024-11-20 07:28:03.220821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.025 [2024-11-20 07:28:03.220827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.025 [2024-11-20 07:28:03.220975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.025 [2024-11-20 07:28:03.221123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.025 [2024-11-20 07:28:03.221129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.025 [2024-11-20 07:28:03.221134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.025 [2024-11-20 07:28:03.221138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:41.025 [2024-11-20 07:28:03.232997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.025 [2024-11-20 07:28:03.233571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.025 [2024-11-20 07:28:03.233602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.025 [2024-11-20 07:28:03.233610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.025 [2024-11-20 07:28:03.233775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.025 [2024-11-20 07:28:03.233927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.025 [2024-11-20 07:28:03.233934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.025 [2024-11-20 07:28:03.233939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.025 [2024-11-20 07:28:03.233947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.025 [2024-11-20 07:28:03.245664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.025 [2024-11-20 07:28:03.246223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.025 [2024-11-20 07:28:03.246254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.025 [2024-11-20 07:28:03.246263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.025 [2024-11-20 07:28:03.246431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.025 [2024-11-20 07:28:03.246583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.025 [2024-11-20 07:28:03.246590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.025 [2024-11-20 07:28:03.246596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.025 [2024-11-20 07:28:03.246602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:41.025 [2024-11-20 07:28:03.258329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.025 [2024-11-20 07:28:03.258793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.025 [2024-11-20 07:28:03.258808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.025 [2024-11-20 07:28:03.258814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.025 [2024-11-20 07:28:03.258963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.025 [2024-11-20 07:28:03.259112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.025 [2024-11-20 07:28:03.259118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.025 [2024-11-20 07:28:03.259124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.026 [2024-11-20 07:28:03.259130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.026 [2024-11-20 07:28:03.270973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.026 [2024-11-20 07:28:03.271535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.026 [2024-11-20 07:28:03.271565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.026 [2024-11-20 07:28:03.271575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.026 [2024-11-20 07:28:03.271742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.026 [2024-11-20 07:28:03.271894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.026 [2024-11-20 07:28:03.271901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.026 [2024-11-20 07:28:03.271907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.026 [2024-11-20 07:28:03.271913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:41.026 [2024-11-20 07:28:03.283640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.026 [2024-11-20 07:28:03.284106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.026 [2024-11-20 07:28:03.284121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.026 [2024-11-20 07:28:03.284126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.026 [2024-11-20 07:28:03.284279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.026 [2024-11-20 07:28:03.284430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.026 [2024-11-20 07:28:03.284436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.026 [2024-11-20 07:28:03.284441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.026 [2024-11-20 07:28:03.284446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.289 [2024-11-20 07:28:03.296288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.289 [2024-11-20 07:28:03.296793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-11-20 07:28:03.296827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.289 [2024-11-20 07:28:03.296836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.289 [2024-11-20 07:28:03.297000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.289 [2024-11-20 07:28:03.297152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.289 [2024-11-20 07:28:03.297164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.289 [2024-11-20 07:28:03.297170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.289 [2024-11-20 07:28:03.297175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:41.289 [2024-11-20 07:28:03.308881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.289 [2024-11-20 07:28:03.309306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-11-20 07:28:03.309321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.289 [2024-11-20 07:28:03.309327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.289 [2024-11-20 07:28:03.309476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.289 [2024-11-20 07:28:03.309625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.289 [2024-11-20 07:28:03.309631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.289 [2024-11-20 07:28:03.309636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.289 [2024-11-20 07:28:03.309642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
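The attempt timestamps (03.296288, 03.308881, 03.321488, ...) show the retry cadence: the host re-arms the reset roughly every 12 to 13 ms for as long as the port stays refused. Checking one gap directly from the timestamps above:

  awk 'BEGIN { printf "%.1f ms between attempts\n", (3.308881 - 3.296288) * 1000 }'   # ~12.6 ms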
00:29:41.289 [2024-11-20 07:28:03.321488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.289 [2024-11-20 07:28:03.321946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-11-20 07:28:03.321958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.289 [2024-11-20 07:28:03.321964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.289 [2024-11-20 07:28:03.322112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.289 [2024-11-20 07:28:03.322273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.289 [2024-11-20 07:28:03.322279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.289 [2024-11-20 07:28:03.322285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.289 [2024-11-20 07:28:03.322290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:41.289 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:41.289 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:29:41.289 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:41.289 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:41.289 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:41.289 [2024-11-20 07:28:03.334131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.289 [2024-11-20 07:28:03.334606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-11-20 07:28:03.334618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.289 [2024-11-20 07:28:03.334624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.289 [2024-11-20 07:28:03.334772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.289 [2024-11-20 07:28:03.334921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.289 [2024-11-20 07:28:03.334927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.289 [2024-11-20 07:28:03.334932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.289 [2024-11-20 07:28:03.334936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.289 [2024-11-20 07:28:03.346780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.289 [2024-11-20 07:28:03.347368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-11-20 07:28:03.347398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.289 [2024-11-20 07:28:03.347407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.289 [2024-11-20 07:28:03.347572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.289 [2024-11-20 07:28:03.347724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.289 [2024-11-20 07:28:03.347732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.289 [2024-11-20 07:28:03.347738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.289 [2024-11-20 07:28:03.347744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:41.289 [2024-11-20 07:28:03.359458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.289 [2024-11-20 07:28:03.359936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-11-20 07:28:03.359951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.290 [2024-11-20 07:28:03.359958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.290 [2024-11-20 07:28:03.360107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.290 [2024-11-20 07:28:03.360262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.290 [2024-11-20 07:28:03.360269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.290 [2024-11-20 07:28:03.360274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.290 [2024-11-20 07:28:03.360279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.290 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:41.290 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:41.290 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.290 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:41.290 [2024-11-20 07:28:03.372121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.290 [2024-11-20 07:28:03.372446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-11-20 07:28:03.372459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.290 [2024-11-20 07:28:03.372464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.290 [2024-11-20 07:28:03.372613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.290 [2024-11-20 07:28:03.372761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.290 [2024-11-20 07:28:03.372767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.290 [2024-11-20 07:28:03.372772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.290 [2024-11-20 07:28:03.372777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
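The trap installed at nvmf/common.sh@512 just above is the harness's safety net: whatever happens after this point, shared-memory state is dumped and nvmftestfini tears the target down. The generic shape of the pattern (a sketch; process_shm and nvmftestfini are functions defined by the harness's nvmf/common.sh):

  # Always run teardown, even on Ctrl-C or an unexpected exit; '|| :' keeps
  # a failing process_shm from aborting the cleanup chain under 'set -e'.
  trap 'process_shm --id "$NVMF_APP_SHM_ID" || :; nvmftestfini' SIGINT SIGTERM EXIT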
00:29:41.290 [2024-11-20 07:28:03.376546] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.290 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.290 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:41.290 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.290 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:41.290 [2024-11-20 07:28:03.384768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.290 [2024-11-20 07:28:03.385196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-11-20 07:28:03.385216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.290 [2024-11-20 07:28:03.385222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.290 [2024-11-20 07:28:03.385375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.290 [2024-11-20 07:28:03.385525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.290 [2024-11-20 07:28:03.385531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.290 [2024-11-20 07:28:03.385537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.290 [2024-11-20 07:28:03.385542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:41.290 [2024-11-20 07:28:03.397387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.290 [2024-11-20 07:28:03.397849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-11-20 07:28:03.397861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.290 [2024-11-20 07:28:03.397867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.290 [2024-11-20 07:28:03.398016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.290 [2024-11-20 07:28:03.398169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.290 [2024-11-20 07:28:03.398175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.290 [2024-11-20 07:28:03.398180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.290 [2024-11-20 07:28:03.398189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:41.290 [2024-11-20 07:28:03.410028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.290 Malloc0 00:29:41.290 [2024-11-20 07:28:03.410523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-11-20 07:28:03.410536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.290 [2024-11-20 07:28:03.410541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.290 [2024-11-20 07:28:03.410690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.290 [2024-11-20 07:28:03.410838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.290 [2024-11-20 07:28:03.410844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.290 [2024-11-20 07:28:03.410849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.290 [2024-11-20 07:28:03.410854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:41.290 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.290 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:41.290 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.290 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:41.290 [2024-11-20 07:28:03.422702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.290 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.290 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:41.290 [2024-11-20 07:28:03.423166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-11-20 07:28:03.423179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.290 [2024-11-20 07:28:03.423184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.290 [2024-11-20 07:28:03.423334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.290 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.290 [2024-11-20 07:28:03.423482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.290 [2024-11-20 07:28:03.423488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.290 [2024-11-20 07:28:03.423493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:29:41.290 [2024-11-20 07:28:03.423498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:41.290 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:41.290 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.290 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:41.290 [2024-11-20 07:28:03.435339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.290 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.290 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:41.290 [2024-11-20 07:28:03.435761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-11-20 07:28:03.435774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7b000 with addr=10.0.0.2, port=4420 00:29:41.290 [2024-11-20 07:28:03.435780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7b000 is same with the state(6) to be set 00:29:41.290 [2024-11-20 07:28:03.435927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7b000 (9): Bad file descriptor 00:29:41.290 [2024-11-20 07:28:03.436076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:41.290 [2024-11-20 07:28:03.436082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:41.290 [2024-11-20 07:28:03.436087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:41.290 [2024-11-20 07:28:03.436092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:41.290 [2024-11-20 07:28:03.441944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:41.290 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.290 07:28:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3702230 00:29:41.290 [2024-11-20 07:28:03.447940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:41.290 [2024-11-20 07:28:03.470973] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
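Stripped of the interleaved reconnect noise, host/bdevperf.sh steps 17-21 above are a five-call JSON-RPC bring-up of the target, and the listener added in the last call is what finally produces the "Resetting controller successful" line. Replayed by hand it would look like this (a sketch, assuming rpc_cmd forwards to scripts/rpc.py as in the stock SPDK test harness):

  rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8192-byte in-capsule data
  rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420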
00:29:42.804 5010.29 IOPS, 19.57 MiB/s [2024-11-20T06:28:06.023Z] 6004.50 IOPS, 23.46 MiB/s [2024-11-20T06:28:07.405Z] 6778.33 IOPS, 26.48 MiB/s [2024-11-20T06:28:08.349Z] 7379.50 IOPS, 28.83 MiB/s [2024-11-20T06:28:09.288Z] 7901.09 IOPS, 30.86 MiB/s [2024-11-20T06:28:10.225Z] 8322.08 IOPS, 32.51 MiB/s [2024-11-20T06:28:11.163Z] 8676.77 IOPS, 33.89 MiB/s [2024-11-20T06:28:12.102Z] 8974.93 IOPS, 35.06 MiB/s [2024-11-20T06:28:12.102Z] 9244.47 IOPS, 36.11 MiB/s 00:29:49.824 Latency(us) 00:29:49.824 [2024-11-20T06:28:12.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:49.824 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:49.824 Verification LBA range: start 0x0 length 0x4000 00:29:49.824 Nvme1n1 : 15.00 9245.37 36.11 13358.14 0.00 5643.50 549.55 12997.97 00:29:49.824 [2024-11-20T06:28:12.102Z] =================================================================================================================== 00:29:49.824 [2024-11-20T06:28:12.102Z] Total : 9245.37 36.11 13358.14 0.00 5643.50 549.55 12997.97 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:50.085 rmmod nvme_tcp 00:29:50.085 rmmod nvme_fabrics 00:29:50.085 rmmod nvme_keyring 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3703259 ']' 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3703259 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 3703259 ']' 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 3703259 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3703259 
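The per-second samples climb from 5010 to 9244 IOPS as the controller reconnects and the queue depth of 128 fills; with 4096-byte IOs, the MiB/s column is simply IOPS times 4 KiB. A quick check of the summary row (plain shell arithmetic on the table values above):

  awk 'BEGIN { printf "%.2f MiB/s\n", 9245.37 * 4096 / (1024 * 1024) }'   # 36.11, matching the Total row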
00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3703259' 00:29:50.085 killing process with pid 3703259 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 3703259 00:29:50.085 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 3703259 00:29:50.345 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:50.345 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:50.345 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:50.345 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:29:50.345 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:29:50.345 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:50.345 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:29:50.345 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:50.345 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:50.345 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.345 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.345 07:28:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.255 07:28:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:52.255 00:29:52.255 real 0m28.366s 00:29:52.255 user 1m3.675s 00:29:52.255 sys 0m7.667s 00:29:52.255 07:28:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:52.255 07:28:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.255 ************************************ 00:29:52.255 END TEST nvmf_bdevperf 00:29:52.255 ************************************ 00:29:52.255 07:28:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:52.255 07:28:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:52.255 07:28:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:52.255 07:28:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.255 ************************************ 00:29:52.255 START TEST nvmf_target_disconnect 00:29:52.255 ************************************ 00:29:52.255 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:52.519 * Looking for test storage... 
00:29:52.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:52.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.519 --rc genhtml_branch_coverage=1 00:29:52.519 --rc genhtml_function_coverage=1 00:29:52.519 --rc genhtml_legend=1 00:29:52.519 --rc geninfo_all_blocks=1 00:29:52.519 --rc geninfo_unexecuted_blocks=1 00:29:52.519 00:29:52.519 ' 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:52.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.519 --rc genhtml_branch_coverage=1 00:29:52.519 --rc genhtml_function_coverage=1 00:29:52.519 --rc genhtml_legend=1 00:29:52.519 --rc geninfo_all_blocks=1 00:29:52.519 --rc geninfo_unexecuted_blocks=1 00:29:52.519 00:29:52.519 ' 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:52.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.519 --rc genhtml_branch_coverage=1 00:29:52.519 --rc genhtml_function_coverage=1 00:29:52.519 --rc genhtml_legend=1 00:29:52.519 --rc geninfo_all_blocks=1 00:29:52.519 --rc geninfo_unexecuted_blocks=1 00:29:52.519 00:29:52.519 ' 00:29:52.519 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:52.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.519 --rc genhtml_branch_coverage=1 00:29:52.519 --rc genhtml_function_coverage=1 00:29:52.519 --rc genhtml_legend=1 00:29:52.520 --rc geninfo_all_blocks=1 00:29:52.520 --rc geninfo_unexecuted_blocks=1 00:29:52.520 00:29:52.520 ' 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:52.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:52.520 07:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:00.661 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:00.661 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:30:00.661 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:00.661 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:00.661 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:00.661 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:00.661 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:00.661 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:30:00.661 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:00.661 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:30:00.661 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:30:00.661 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:30:00.661 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:30:00.661 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:30:00.661 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:30:00.661 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:00.661 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:00.661 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:00.662 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:00.662 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:00.662 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:00.662 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
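The discovery pass above matches both PCI functions 0000:4b:00.0 and 0000:4b:00.1 against Intel device 0x159b (an E810 port bound to the ice driver) and then reads each function's netdev name straight out of sysfs, yielding cvl_0_0 and cvl_0_1. The equivalent manual lookup over the same sysfs paths the script globs:

  for pci in 0000:4b:00.0 0000:4b:00.1; do
      ls "/sys/bus/pci/devices/$pci/net/"    # -> cvl_0_0 and cvl_0_1 in this run
  done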
00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:00.662 07:28:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:00.662 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:00.662 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:00.662 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:00.662 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:00.662 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:00.662 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:00.662 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:00.662 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:00.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:00.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:30:00.662 00:30:00.662 --- 10.0.0.2 ping statistics --- 00:30:00.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.662 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:30:00.662 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:00.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:00.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:30:00.662 00:30:00.662 --- 10.0.0.1 ping statistics --- 00:30:00.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.662 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:30:00.662 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:00.662 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:30:00.662 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:00.662 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:00.662 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:00.662 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:00.662 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:00.662 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:00.662 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:00.662 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:00.662 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:00.662 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:00.662 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:00.662 ************************************ 00:30:00.662 START TEST nvmf_target_disconnect_tc1 00:30:00.662 ************************************ 00:30:00.662 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:00.663 07:28:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:00.663 [2024-11-20 07:28:22.476794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.663 [2024-11-20 07:28:22.476896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa4ad0 with addr=10.0.0.2, port=4420 00:30:00.663 [2024-11-20 07:28:22.476924] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:00.663 [2024-11-20 07:28:22.476942] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:00.663 [2024-11-20 07:28:22.476950] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:30:00.663 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:00.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:00.663 Initializing NVMe Controllers 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:00.663 00:30:00.663 real 0m0.146s 00:30:00.663 user 0m0.060s 00:30:00.663 sys 0m0.087s 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:00.663 ************************************ 00:30:00.663 END TEST nvmf_target_disconnect_tc1 00:30:00.663 ************************************ 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 
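tc1 above is a pure negative test: no target is listening yet, so the reconnect example's spdk_nvme_probe() must fail with ECONNREFUSED (errno 111), and the harness's NOT/valid_exec_arg wrappers translate the non-zero exit into a pass (es=1). Stripped of the wrappers, the check amounts to roughly the following (path written relative to the workspace; the flags are exactly those in the trace):

# expect failure: nothing listens on 10.0.0.2:4420 yet
if ./spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
       -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
    echo "tc1 FAIL: probe unexpectedly succeeded" >&2
    exit 1
fi
echo "tc1 OK: connect refused as expected"

tc2, which starts below, inverts the scenario: it brings up a real target first and only then injects the disconnect.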
00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:00.663 ************************************ 00:30:00.663 START TEST nvmf_target_disconnect_tc2 00:30:00.663 ************************************ 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3709334 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3709334 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3709334 ']' 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:00.663 07:28:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:00.663 [2024-11-20 07:28:22.639637] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:30:00.663 [2024-11-20 07:28:22.639698] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.663 [2024-11-20 07:28:22.739540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:00.663 [2024-11-20 07:28:22.792152] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:00.663 [2024-11-20 07:28:22.792210] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
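disconnect_init launches the target application inside the namespace (NVMF_APP was prefixed with the netns exec command earlier) and blocks in waitforlisten until the RPC socket answers; the EAL and reactor notices that follow are its normal startup chatter, with -m 0xF0 pinning the reactors to cores 4-7. A hand-run equivalent, with waitforlisten replaced by a simple poll of a real RPC (rpc_get_methods) against the default /var/tmp/spdk.sock - the poll loop is a stand-in, not the harness's exact code:

ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!
# poll until the RPC server responds (stand-in for the harness's waitforlisten)
until ./spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
echo "nvmf_tgt (pid $nvmfpid) is up and serving RPCs"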
00:30:00.663 [2024-11-20 07:28:22.792219] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:00.663 [2024-11-20 07:28:22.792226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:00.663 [2024-11-20 07:28:22.792232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:00.663 [2024-11-20 07:28:22.794219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:00.663 [2024-11-20 07:28:22.794462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:00.663 [2024-11-20 07:28:22.794622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:00.663 [2024-11-20 07:28:22.794622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:01.235 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:01.235 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:30:01.235 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:01.235 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:01.235 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:01.235 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.235 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:01.235 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.235 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:01.497 Malloc0 00:30:01.497 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.497 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:01.497 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.497 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:01.497 [2024-11-20 07:28:23.559326] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.497 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.497 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:01.497 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.497 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:01.497 07:28:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.497 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:01.497 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.497 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:01.497 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.497 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:01.497 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.497 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:01.497 [2024-11-20 07:28:23.599701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.497 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.497 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:01.497 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.497 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:01.497 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.497 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3709661 00:30:01.497 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:01.497 07:28:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:03.413 07:28:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3709334 00:30:03.413 07:28:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error 
(sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 [2024-11-20 07:28:25.638945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write 
completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Write completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 Read completed with error (sct=0, sc=8) 00:30:03.413 starting I/O failed 00:30:03.413 [2024-11-20 07:28:25.639257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.413 [2024-11-20 07:28:25.639674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-11-20 07:28:25.639697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.413 qpair failed and we were unable to recover it. 00:30:03.413 [2024-11-20 07:28:25.640028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-11-20 07:28:25.640041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.413 qpair failed and we were unable to recover it. 00:30:03.413 [2024-11-20 07:28:25.640543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-11-20 07:28:25.640608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.413 qpair failed and we were unable to recover it. 00:30:03.413 [2024-11-20 07:28:25.641014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.413 [2024-11-20 07:28:25.641031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.413 qpair failed and we were unable to recover it. 00:30:03.413 [2024-11-20 07:28:25.641444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.641509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 
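Between the listener setup and the error burst above, the harness provisioned the subsystem over RPC, gave the reconnect example two seconds to ramp up I/O at queue depth 32 across four cores (-q 32 -c 0xF), and then hard-killed the target with kill -9; every outstanding command completes with an abort status (sct=0, sc=8) and both qpairs report CQ transport error -6. The provisioning sequence, as issued through rpc_cmd (a thin wrapper around rpc.py), condenses to:

rpc=./spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MB ramdisk, 512 B blocks
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# ... reconnect runs for ~2 s, then the fault is injected:
kill -9 "$nvmfpid"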
00:30:03.414 [2024-11-20 07:28:25.641878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.641895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.642131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.642146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.642522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.642536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.642729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.642743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.643056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.643070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.643442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.643458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.643766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.643781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.644227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.644245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.644465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.644483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.644683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.644697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 
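From here on the trace is the same pattern repeating: the example tears down the dead controller and retries, each attempt is a fresh TCP connect() that the kernel answers with ECONNREFUSED (errno 111) because nothing listens on 10.0.0.2:4420 anymore, and nvme_tcp gives up on that qpair ("unable to recover"). A hypothetical spot check, not something the harness runs, that would confirm the listener is really gone:

# expect no listener line (only the ss column header): the nvmf_tgt
# that owned 10.0.0.2:4420 was SIGKILLed above
ip netns exec cvl_0_0_ns_spdk ss -ltn 'sport = :4420'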
00:30:03.414 [2024-11-20 07:28:25.645003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.645017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.645339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.645354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.645701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.645716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.646063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.646077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.646310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.646329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.646659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.646674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.647028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.647042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.647259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.647276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.647624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.647638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.647940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.647955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 
00:30:03.414 [2024-11-20 07:28:25.648276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.648292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.648642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.648658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.648973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.648986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.649239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.649253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.649639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.649653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.649997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.650011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.650228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.650241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.650552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.650565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.650913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.650929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.651268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.651283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 
00:30:03.414 [2024-11-20 07:28:25.651571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.651584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.651896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.651908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.652115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.652129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.652517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.652531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.652829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.652842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.653146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.653184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.414 [2024-11-20 07:28:25.653583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.414 [2024-11-20 07:28:25.653597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.414 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.653914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.653926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.654274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.654289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.654526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.654539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 
00:30:03.415 [2024-11-20 07:28:25.654870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.654883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.655182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.655204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.655535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.655549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.655901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.655915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.656215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.656228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.656419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.656431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.656731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.656744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.657068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.657081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.657435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.657449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.657804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.657818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 
00:30:03.415 [2024-11-20 07:28:25.658144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.658170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.658504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.658520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.658844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.658860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.659163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.659178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.659514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.659530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.659725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.659741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.660074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.660088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.660395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.660410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.660592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.660608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.660799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.660813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 
00:30:03.415 [2024-11-20 07:28:25.661154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.661182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.661525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.661539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.661858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.661873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.662224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.662240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.662552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.662568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.662870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.662883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.663238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.663254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.663441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.663456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.663765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.663780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.664112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.664127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 
00:30:03.415 [2024-11-20 07:28:25.664440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.664456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.664767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.664782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.665109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.665123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.665331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.665346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.665650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.415 [2024-11-20 07:28:25.665663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.415 qpair failed and we were unable to recover it. 00:30:03.415 [2024-11-20 07:28:25.666011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.666026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 00:30:03.416 [2024-11-20 07:28:25.666338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.666353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 00:30:03.416 [2024-11-20 07:28:25.666657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.666671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 00:30:03.416 [2024-11-20 07:28:25.666970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.666984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 00:30:03.416 [2024-11-20 07:28:25.667214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.667228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 
00:30:03.416 [2024-11-20 07:28:25.667560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.667574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 00:30:03.416 [2024-11-20 07:28:25.667882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.667895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 00:30:03.416 [2024-11-20 07:28:25.668213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.668228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 00:30:03.416 [2024-11-20 07:28:25.668530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.668543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 00:30:03.416 [2024-11-20 07:28:25.668870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.668884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 00:30:03.416 [2024-11-20 07:28:25.669201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.669215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 00:30:03.416 [2024-11-20 07:28:25.669523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.669539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 00:30:03.416 [2024-11-20 07:28:25.669842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.669859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 00:30:03.416 [2024-11-20 07:28:25.670182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.670201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 00:30:03.416 [2024-11-20 07:28:25.670523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.670544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 
00:30:03.416 [2024-11-20 07:28:25.670848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.670867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 00:30:03.416 [2024-11-20 07:28:25.671176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.671196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 00:30:03.416 [2024-11-20 07:28:25.671518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.671536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 00:30:03.416 [2024-11-20 07:28:25.671839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.671857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 00:30:03.416 [2024-11-20 07:28:25.672188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.672209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 00:30:03.416 [2024-11-20 07:28:25.672416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.672436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 00:30:03.416 [2024-11-20 07:28:25.672756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.672774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 00:30:03.416 [2024-11-20 07:28:25.673075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.673094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 00:30:03.416 [2024-11-20 07:28:25.673421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.673442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 00:30:03.416 [2024-11-20 07:28:25.673763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.416 [2024-11-20 07:28:25.673781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.416 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats verbatim for every subsequent reconnect attempt, timestamps 2024-11-20 07:28:25.671839 through 07:28:25.749971, wall clock 00:30:03.416 through 00:30:03.694 ...]
00:30:03.694 [2024-11-20 07:28:25.750359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.750393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.750737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.750767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.751243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.751277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.751621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.751653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.752074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.752105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.752497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.752530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.752889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.752922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.753184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.753217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.753590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.753622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.753974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.754007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 
00:30:03.694 [2024-11-20 07:28:25.754390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.754422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.754772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.754804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.755185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.755218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.755580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.755611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.755965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.755997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.756239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.756275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.756693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.756725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.757102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.757133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.757395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.757428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.757791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.757824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 
00:30:03.694 [2024-11-20 07:28:25.758143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.758200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.758561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.758591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.758953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.758985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.759355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.759390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.759745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.759777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.760123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.760156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.760549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.760587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.760937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.760969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.694 [2024-11-20 07:28:25.761335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.694 [2024-11-20 07:28:25.761367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.694 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.761720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.761752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 
00:30:03.695 [2024-11-20 07:28:25.762091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.762123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.762499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.762531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.762889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.762921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.763280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.763313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.763676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.763708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.764076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.764107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.764462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.764494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.764852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.764884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.765240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.765274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.765716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.765747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 
00:30:03.695 [2024-11-20 07:28:25.766097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.766129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.766514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.766547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.766942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.766973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.767327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.767361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.767713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.767743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.768100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.768132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.768498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.768530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.768894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.768926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.769285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.769317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.769754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.769786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 
00:30:03.695 [2024-11-20 07:28:25.770144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.770189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.770470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.770501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.770848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.770879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.771232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.771271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.771642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.771676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.772051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.772082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.772439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.772472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.772705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.772739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.773084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.773115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.773479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.773512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 
00:30:03.695 [2024-11-20 07:28:25.773878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.773910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.774266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.774300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.774545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.774575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.774933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.774964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.775199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.775231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.775634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.775664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.776020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.695 [2024-11-20 07:28:25.776051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.695 qpair failed and we were unable to recover it. 00:30:03.695 [2024-11-20 07:28:25.776389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.776421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.776793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.776827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.777179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.777213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 
00:30:03.696 [2024-11-20 07:28:25.777576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.777607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.777979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.778011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.778371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.778403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.778760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.778790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.779157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.779200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.779550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.779582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.779929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.779960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.780313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.780347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.780719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.780749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.781111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.781141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 
00:30:03.696 [2024-11-20 07:28:25.781500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.781537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.781890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.781923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.782285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.782318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.782685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.782716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.783083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.783114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.783521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.783554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.783904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.783936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.784265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.784298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.784666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.784697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.785053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.785085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 
00:30:03.696 [2024-11-20 07:28:25.785477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.785510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.785895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.785926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.786278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.786311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.786677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.786708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.787068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.787100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.787480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.787512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.787865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.787897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.788131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.788177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.788549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.788581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.788934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.788966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 
00:30:03.696 [2024-11-20 07:28:25.789324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.789357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.789710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.789742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.790103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.790135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.790517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.790549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.790904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.696 [2024-11-20 07:28:25.790936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.696 qpair failed and we were unable to recover it. 00:30:03.696 [2024-11-20 07:28:25.791295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.791327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-11-20 07:28:25.791679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.791711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-11-20 07:28:25.792066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.792098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-11-20 07:28:25.792445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.792478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-11-20 07:28:25.792831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.792862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 
00:30:03.697 [2024-11-20 07:28:25.793225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.793259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-11-20 07:28:25.793627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.793658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-11-20 07:28:25.794024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.794055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-11-20 07:28:25.794415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.794449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-11-20 07:28:25.794816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.794847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-11-20 07:28:25.795212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.795245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-11-20 07:28:25.795638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.795669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-11-20 07:28:25.795999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.796029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-11-20 07:28:25.796385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.796417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-11-20 07:28:25.796762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.796794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 
00:30:03.697 [2024-11-20 07:28:25.797177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.797212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-11-20 07:28:25.797604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.797639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-11-20 07:28:25.797996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.798027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-11-20 07:28:25.798393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.798427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-11-20 07:28:25.798774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.798807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-11-20 07:28:25.799179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.799212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-11-20 07:28:25.799554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.799587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-11-20 07:28:25.799946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.799978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-11-20 07:28:25.800328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.800362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 00:30:03.697 [2024-11-20 07:28:25.800612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.800643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it. 
00:30:03.697 [2024-11-20 07:28:25.801065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.697 [2024-11-20 07:28:25.801097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.697 qpair failed and we were unable to recover it.
00:30:03.703 [... the same three-line sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats roughly 200 more times between 07:28:25.801 and 07:28:25.884; only the timestamps differ ...]
00:30:03.703 [2024-11-20 07:28:25.884773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-11-20 07:28:25.884810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-11-20 07:28:25.885191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-11-20 07:28:25.885226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-11-20 07:28:25.885581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-11-20 07:28:25.885613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-11-20 07:28:25.885970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-11-20 07:28:25.886003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-11-20 07:28:25.886258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-11-20 07:28:25.886295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-11-20 07:28:25.886727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-11-20 07:28:25.886760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-11-20 07:28:25.887104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-11-20 07:28:25.887135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-11-20 07:28:25.887544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-11-20 07:28:25.887577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-11-20 07:28:25.887924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-11-20 07:28:25.887957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-11-20 07:28:25.888318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-11-20 07:28:25.888350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 
00:30:03.703 [2024-11-20 07:28:25.888707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-11-20 07:28:25.888739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-11-20 07:28:25.889087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-11-20 07:28:25.889120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-11-20 07:28:25.889514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-11-20 07:28:25.889546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-11-20 07:28:25.889877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-11-20 07:28:25.889907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-11-20 07:28:25.890286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-11-20 07:28:25.890320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-11-20 07:28:25.890568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-11-20 07:28:25.890598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-11-20 07:28:25.890966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-11-20 07:28:25.890998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-11-20 07:28:25.891343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-11-20 07:28:25.891378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-11-20 07:28:25.891737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-11-20 07:28:25.891769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-11-20 07:28:25.892126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-11-20 07:28:25.892166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 
00:30:03.703 [2024-11-20 07:28:25.892553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-11-20 07:28:25.892587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.703 [2024-11-20 07:28:25.892931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.703 [2024-11-20 07:28:25.892963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.703 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.893339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.893372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.893741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.893772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.894130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.894204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.894561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.894598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.894973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.895004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.895384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.895418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.895783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.895814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.896187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.896222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 
00:30:03.704 [2024-11-20 07:28:25.896594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.896626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.896911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.896943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.897196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.897229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.897596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.897627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.897994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.898025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.898383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.898415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.898775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.898805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.899173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.899208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.899565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.899597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.899971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.900003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 
00:30:03.704 [2024-11-20 07:28:25.900372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.900404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.900753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.900784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.901145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.901193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.901448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.901478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.901882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.901915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.902292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.902326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.902694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.902726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.903089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.903121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.903520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.903553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.903920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.903952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 
00:30:03.704 [2024-11-20 07:28:25.904304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.904339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.904679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.904711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.904946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.904984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.905355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.905391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.905732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.905765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.906140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.906180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.906570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.906601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.906988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.907019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.907428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.907461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-11-20 07:28:25.907842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.704 [2024-11-20 07:28:25.907875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.704 qpair failed and we were unable to recover it. 
00:30:03.704 [2024-11-20 07:28:25.908282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.908316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.908694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.908725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.909081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.909115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.909522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.909554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.909804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.909836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.910188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.910220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.910597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.910630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.911031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.911064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.911398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.911430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.911792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.911822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 
00:30:03.705 [2024-11-20 07:28:25.912196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.912230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.912599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.912631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.912988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.913021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.913380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.913413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.913772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.913806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.914172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.914206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.914568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.914601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.914938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.914970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.915324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.915356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.915716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.915749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 
00:30:03.705 [2024-11-20 07:28:25.916097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.916128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.916514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.916547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.916923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.916955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.917209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.917242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.917587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.917619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.917995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.918026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.918398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.918432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.918805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.918836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.919204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.919239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.919616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.919648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 
00:30:03.705 [2024-11-20 07:28:25.920010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.920042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.920402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.920434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.920821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.920852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.921217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.921275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.921543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.921576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.921807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.921838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.922193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.922226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.922624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.705 [2024-11-20 07:28:25.922655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-11-20 07:28:25.922888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.922920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.923282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.923315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 
00:30:03.706 [2024-11-20 07:28:25.923563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.923596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.923955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.923986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.924345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.924380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.924746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.924779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.925143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.925194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.925574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.925605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.925971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.926003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.926408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.926441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.926893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.926926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.927285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.927317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 
00:30:03.706 [2024-11-20 07:28:25.927681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.927712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.928089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.928121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.928394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.928425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.928770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.928801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.929195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.929228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.929594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.929626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.929975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.930008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.930344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.930377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.930513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.930543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.930909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.930940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 
00:30:03.706 [2024-11-20 07:28:25.931376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.931415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.931643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.931673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.932031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.932062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.932485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.932518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.932867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.932897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.933151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.933190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.933540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.933571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.933792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.933822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.934181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.934212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 00:30:03.706 [2024-11-20 07:28:25.934577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.706 [2024-11-20 07:28:25.934611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.706 qpair failed and we were unable to recover it. 
00:30:03.706 [2024-11-20 07:28:25.934846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.706 [2024-11-20 07:28:25.934882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:03.706 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats for roughly 200 further connect() attempts, timestamps 07:28:25.935 through 07:28:26.013, Jenkins clock 00:30:03.706-00:30:03.991 ...]
00:30:03.991 [2024-11-20 07:28:26.013843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.991 [2024-11-20 07:28:26.013875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:03.991 qpair failed and we were unable to recover it.
00:30:03.991 [2024-11-20 07:28:26.014234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-11-20 07:28:26.014268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-11-20 07:28:26.014632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-11-20 07:28:26.014663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-11-20 07:28:26.015022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-11-20 07:28:26.015053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-11-20 07:28:26.015460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-11-20 07:28:26.015493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-11-20 07:28:26.015847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-11-20 07:28:26.015879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-11-20 07:28:26.016312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-11-20 07:28:26.016345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-11-20 07:28:26.016719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-11-20 07:28:26.016752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-11-20 07:28:26.017097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-11-20 07:28:26.017129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-11-20 07:28:26.017480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-11-20 07:28:26.017512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-11-20 07:28:26.017873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-11-20 07:28:26.017905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 
00:30:03.991 [2024-11-20 07:28:26.018264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-11-20 07:28:26.018296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-11-20 07:28:26.018647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-11-20 07:28:26.018680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-11-20 07:28:26.019036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-11-20 07:28:26.019067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-11-20 07:28:26.019430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-11-20 07:28:26.019462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-11-20 07:28:26.019807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-11-20 07:28:26.019839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-11-20 07:28:26.020218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-11-20 07:28:26.020249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-11-20 07:28:26.020595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-11-20 07:28:26.020627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-11-20 07:28:26.020984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-11-20 07:28:26.021017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-11-20 07:28:26.021397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-11-20 07:28:26.021429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-11-20 07:28:26.021792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-11-20 07:28:26.021824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 
00:30:03.991 [2024-11-20 07:28:26.022186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-11-20 07:28:26.022219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-11-20 07:28:26.022581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-11-20 07:28:26.022615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.991 qpair failed and we were unable to recover it. 00:30:03.991 [2024-11-20 07:28:26.022968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.991 [2024-11-20 07:28:26.022999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.023344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.023376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.023736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.023775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.024127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.024166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.024524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.024556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.024812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.024843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.025195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.025227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.025626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.025658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 
00:30:03.992 [2024-11-20 07:28:26.026009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.026043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.026285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.026318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.026710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.026743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.027093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.027125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.027457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.027491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.027857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.027888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.028286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.028318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.028554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.028587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.028932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.028964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.029195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.029229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 
00:30:03.992 [2024-11-20 07:28:26.029601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.029633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.029982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.030013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.030386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.030421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.030771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.030803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.031174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.031206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.031554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.031586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.031833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.031866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.032192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.032223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.032576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.032608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.032964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.032997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 
00:30:03.992 [2024-11-20 07:28:26.033341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.033372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.033736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.033774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.034115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.034149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.034549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.034580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.034934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.034965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.035314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.035348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.035698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.035728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.035963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.035997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.036350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.036382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 00:30:03.992 [2024-11-20 07:28:26.036546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.992 [2024-11-20 07:28:26.036576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.992 qpair failed and we were unable to recover it. 
00:30:03.992 [2024-11-20 07:28:26.036825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.036856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.037101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.037131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.037511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.037544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.037889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.037922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.038282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.038315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.038716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.038748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.039105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.039135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.039494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.039527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.039885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.039917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.040278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.040310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 
00:30:03.993 [2024-11-20 07:28:26.040657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.040690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.040926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.040961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.041310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.041344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.041714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.041745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.042108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.042140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.042486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.042519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.042915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.042946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.043179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.043211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.043484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.043514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.043887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.043918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 
00:30:03.993 [2024-11-20 07:28:26.044272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.044307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.044531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.044561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.044899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.044931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.045288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.045319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.045719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.045751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.046113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.046144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.046387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.046418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.046789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.046820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.047177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.047211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.047560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.047591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 
00:30:03.993 [2024-11-20 07:28:26.047948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.047979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.048338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.048373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.048780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.048813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.049155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.049199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.049463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.049494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.049840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.049872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.050231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.050263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.050631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.050664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.993 [2024-11-20 07:28:26.051011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.993 [2024-11-20 07:28:26.051042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.993 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.051378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.051410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 
00:30:03.994 [2024-11-20 07:28:26.051778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.051810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.052051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.052082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.052434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.052466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.052813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.052845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.053215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.053249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.053619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.053651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.054006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.054039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.054396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.054430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.054793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.054825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.055182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.055214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 
00:30:03.994 [2024-11-20 07:28:26.055572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.055602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.055971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.056002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.056363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.056397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.056746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.056777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.057124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.057182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.057543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.057574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.057930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.057961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.058317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.058352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.058693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.058724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.059003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.059039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 
00:30:03.994 [2024-11-20 07:28:26.059398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.059432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.059692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.059725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.059961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.059992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.060229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.060261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.060659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.060690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.061034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.061068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.061412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.061445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.061815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.061847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.062207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.062238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 00:30:03.994 [2024-11-20 07:28:26.062649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.994 [2024-11-20 07:28:26.062682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:03.994 qpair failed and we were unable to recover it. 
00:30:03.994 [2024-11-20 07:28:26.063040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.994 [2024-11-20 07:28:26.063073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:03.994 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats for every reconnect attempt in this interval; only the timestamps advance ...]
00:30:04.000 [2024-11-20 07:28:26.144831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.000 [2024-11-20 07:28:26.144861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:04.000 qpair failed and we were unable to recover it.
00:30:04.000 [2024-11-20 07:28:26.145237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-11-20 07:28:26.145270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-11-20 07:28:26.145653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-11-20 07:28:26.145688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-11-20 07:28:26.145930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-11-20 07:28:26.145962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-11-20 07:28:26.146307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-11-20 07:28:26.146340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-11-20 07:28:26.146697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-11-20 07:28:26.146731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-11-20 07:28:26.147105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-11-20 07:28:26.147136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-11-20 07:28:26.147532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-11-20 07:28:26.147566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-11-20 07:28:26.147799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-11-20 07:28:26.147830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-11-20 07:28:26.148114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.000 [2024-11-20 07:28:26.148148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.000 qpair failed and we were unable to recover it. 00:30:04.000 [2024-11-20 07:28:26.148402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.148437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 
00:30:04.001 [2024-11-20 07:28:26.148819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.148857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.149243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.149277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.149628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.149659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.150015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.150045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.150331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.150362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.150647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.150678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.151045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.151077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.151406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.151438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.151880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.151913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.152272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.152305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 
00:30:04.001 [2024-11-20 07:28:26.152681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.152712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.153063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.153094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.153404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.153439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.153810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.153848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.154076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.154113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.154504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.154539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.154879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.154912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.155280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.155314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.155671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.155702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.156057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.156088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 
00:30:04.001 [2024-11-20 07:28:26.156375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.156407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.156751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.156781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.157011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.157041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.157356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.157390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.157756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.157789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.158146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.158215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.158570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.158600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.158966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.158997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.159338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.159369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.001 [2024-11-20 07:28:26.159732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.159764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 
00:30:04.001 [2024-11-20 07:28:26.160184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.001 [2024-11-20 07:28:26.160216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.001 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.160567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.160607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.160952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.160983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.161296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.161328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.161559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.161590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.161845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.161876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.162112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.162145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.162546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.162578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.162951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.162984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.163239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.163272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 
00:30:04.002 [2024-11-20 07:28:26.163627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.163657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.164027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.164058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.164321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.164353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.164599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.164629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.164975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.165008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.165384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.165418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.165844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.165875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.166240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.166273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.166636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.166668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.166913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.166944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 
00:30:04.002 [2024-11-20 07:28:26.167315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.167346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.167703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.167737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.168096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.168127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.168495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.168528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.168866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.168898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.169124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.169155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.169450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.169482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.169837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.169868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.170233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.170269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.002 [2024-11-20 07:28:26.170639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.170672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 
00:30:04.002 [2024-11-20 07:28:26.171039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.002 [2024-11-20 07:28:26.171071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.002 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.171442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.171474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.171828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.171862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.172221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.172253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.172597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.172627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.172970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.173002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.173366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.173399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.173752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.173783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.174155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.174202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.174557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.174590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 
00:30:04.003 [2024-11-20 07:28:26.174962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.174992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.175336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.175368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.175727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.175760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.176121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.176153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.176513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.176544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.176896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.176927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.177288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.177322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.177683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.177714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.178074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.178108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.178472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.178505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 
00:30:04.003 [2024-11-20 07:28:26.178735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.178765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.179134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.179183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.179520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.179552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.179820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.179851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.180251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.180284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.180522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.180552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.180918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.180951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.181322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.181355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.181593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.181625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.181925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.181957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 
00:30:04.003 [2024-11-20 07:28:26.182203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.182237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.182587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.182618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.182971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.183003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.183234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.183266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.183681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.183714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.184067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.184098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.184456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.184489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.184900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.184931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.185282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.003 [2024-11-20 07:28:26.185315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.003 qpair failed and we were unable to recover it. 00:30:04.003 [2024-11-20 07:28:26.185663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-11-20 07:28:26.185694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 
00:30:04.004 [2024-11-20 07:28:26.185925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-11-20 07:28:26.185955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-11-20 07:28:26.186344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-11-20 07:28:26.186376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-11-20 07:28:26.186610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-11-20 07:28:26.186640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-11-20 07:28:26.187013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-11-20 07:28:26.187045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-11-20 07:28:26.187320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-11-20 07:28:26.187351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-11-20 07:28:26.187581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-11-20 07:28:26.187612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-11-20 07:28:26.187829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-11-20 07:28:26.187860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-11-20 07:28:26.188217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-11-20 07:28:26.188249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-11-20 07:28:26.188623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-11-20 07:28:26.188661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-11-20 07:28:26.189025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-11-20 07:28:26.189059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 
00:30:04.004 [2024-11-20 07:28:26.189409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-11-20 07:28:26.189441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-11-20 07:28:26.189694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-11-20 07:28:26.189724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-11-20 07:28:26.189948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-11-20 07:28:26.189979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-11-20 07:28:26.190309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-11-20 07:28:26.190341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-11-20 07:28:26.190721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-11-20 07:28:26.190752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-11-20 07:28:26.191128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-11-20 07:28:26.191170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-11-20 07:28:26.191543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-11-20 07:28:26.191574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-11-20 07:28:26.191947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-11-20 07:28:26.191980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-11-20 07:28:26.192333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-11-20 07:28:26.192365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 00:30:04.004 [2024-11-20 07:28:26.192588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.004 [2024-11-20 07:28:26.192618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.004 qpair failed and we were unable to recover it. 
00:30:04.004 [2024-11-20 07:28:26.192940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.004 [2024-11-20 07:28:26.192971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:04.004 qpair failed and we were unable to recover it.
[... identical retry failures elided: the three-line error above repeats for every reconnect attempt from 07:28:26.192940 through 07:28:26.271979, always with tqpair=0x20cc0c0, addr=10.0.0.2, port=4420, errno = 111; only the timestamps differ ...]
00:30:04.341 [2024-11-20 07:28:26.271950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.341 [2024-11-20 07:28:26.271979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:04.341 qpair failed and we were unable to recover it.
00:30:04.341 [2024-11-20 07:28:26.272277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.272309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 00:30:04.341 [2024-11-20 07:28:26.272673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.272704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 00:30:04.341 [2024-11-20 07:28:26.273039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.273071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 00:30:04.341 [2024-11-20 07:28:26.273469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.273502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 00:30:04.341 [2024-11-20 07:28:26.273872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.273903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 00:30:04.341 [2024-11-20 07:28:26.274203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.274236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 00:30:04.341 [2024-11-20 07:28:26.274607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.274638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 00:30:04.341 [2024-11-20 07:28:26.275019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.275050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 00:30:04.341 [2024-11-20 07:28:26.275387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.275421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 00:30:04.341 [2024-11-20 07:28:26.275665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.275699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 
00:30:04.341 [2024-11-20 07:28:26.276035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.276068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 00:30:04.341 [2024-11-20 07:28:26.276441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.276473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 00:30:04.341 [2024-11-20 07:28:26.276712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.276743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 00:30:04.341 [2024-11-20 07:28:26.277095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.277125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 00:30:04.341 [2024-11-20 07:28:26.277416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.277454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 00:30:04.341 [2024-11-20 07:28:26.277833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.277865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 00:30:04.341 [2024-11-20 07:28:26.278214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.278247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 00:30:04.341 [2024-11-20 07:28:26.278615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.278646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 00:30:04.341 [2024-11-20 07:28:26.278997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.279029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 00:30:04.341 [2024-11-20 07:28:26.279386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.279420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 
00:30:04.341 [2024-11-20 07:28:26.279657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.279687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 00:30:04.341 [2024-11-20 07:28:26.280031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.280063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 00:30:04.341 [2024-11-20 07:28:26.280486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.280526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 00:30:04.341 [2024-11-20 07:28:26.280881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.280914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 00:30:04.341 [2024-11-20 07:28:26.281180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.281215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.341 qpair failed and we were unable to recover it. 00:30:04.341 [2024-11-20 07:28:26.281590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.341 [2024-11-20 07:28:26.281622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.281756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.281790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.282183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.282216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.282593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.282624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.282853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.282882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 
00:30:04.342 [2024-11-20 07:28:26.283287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.283321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.283669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.283702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.284056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.284087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.284441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.284475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.284832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.284863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.285234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.285268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.285647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.285679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.285959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.285990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.286431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.286463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.286802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.286835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 
00:30:04.342 [2024-11-20 07:28:26.287190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.287224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.287592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.287622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.287987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.288017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.288270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.288302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.288698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.288729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.289175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.289208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.289541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.289571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.289937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.289968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.290340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.290374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.290716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.290746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 
00:30:04.342 [2024-11-20 07:28:26.291098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.291130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.291628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.291660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.292048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.292080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.292518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.292551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.292899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.292930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.293282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.293314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.293680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.293711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.294004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.294034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.294417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.294449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.294810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.294843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 
00:30:04.342 [2024-11-20 07:28:26.295173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.295204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.295573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.295605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.295945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.295976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.342 qpair failed and we were unable to recover it. 00:30:04.342 [2024-11-20 07:28:26.296320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.342 [2024-11-20 07:28:26.296359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.296626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.296657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.296997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.297029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.297448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.297482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.297828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.297861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.298134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.298177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.298457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.298488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 
00:30:04.343 [2024-11-20 07:28:26.298748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.298779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.299130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.299172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.299538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.299569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.300008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.300040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.300291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.300324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.300682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.300714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.300941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.300975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.301287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.301319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.301683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.301715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.302071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.302103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 
00:30:04.343 [2024-11-20 07:28:26.302472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.302505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.302851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.302882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.303131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.303170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.303524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.303556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.303897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.303930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.304157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.304200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.304578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.304609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.304958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.304989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.305305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.305339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.305602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.305633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 
00:30:04.343 [2024-11-20 07:28:26.305978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.306015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.306255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.306286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.306513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.306548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.306905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.306937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.307295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.343 [2024-11-20 07:28:26.307329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.343 qpair failed and we were unable to recover it. 00:30:04.343 [2024-11-20 07:28:26.307695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.307727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.308087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.308117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.308386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.308418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.308771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.308802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.309176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.309209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 
00:30:04.344 [2024-11-20 07:28:26.309492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.309524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.309880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.309911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.310292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.310326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.310701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.310733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.311104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.311135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.311570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.311602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.311950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.311983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.312346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.312378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.312772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.312804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.313017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.313048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 
00:30:04.344 [2024-11-20 07:28:26.313426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.313458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.313801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.313833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.314198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.314230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.314589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.314620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.314981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.315013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.315381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.315414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.315807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.315838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.316201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.316239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.316603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.316636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.316997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.317028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 
00:30:04.344 [2024-11-20 07:28:26.317424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.317458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.317782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.317817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.318224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.318258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.318599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.318631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.318993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.319024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.319305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.319336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.319698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.319730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.320082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.320114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.320444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.320476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 00:30:04.344 [2024-11-20 07:28:26.320831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.344 [2024-11-20 07:28:26.320865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.344 qpair failed and we were unable to recover it. 
00:30:04.344 [2024-11-20 07:28:26.321242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.344 [2024-11-20 07:28:26.321273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:04.344 qpair failed and we were unable to recover it.
00:30:04.350 [... the three-line error sequence above repeats, with advancing timestamps, through 2024-11-20 07:28:26.400: every connect() attempt for tqpair=0x20cc0c0 against 10.0.0.2, port=4420 fails with errno = 111, and each qpair fails and cannot be recovered ...]
00:30:04.350 [2024-11-20 07:28:26.400370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.350 [2024-11-20 07:28:26.400402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.350 qpair failed and we were unable to recover it. 00:30:04.350 [2024-11-20 07:28:26.400775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.350 [2024-11-20 07:28:26.400808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.350 qpair failed and we were unable to recover it. 00:30:04.350 [2024-11-20 07:28:26.401198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.350 [2024-11-20 07:28:26.401231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.350 qpair failed and we were unable to recover it. 00:30:04.350 [2024-11-20 07:28:26.401590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.350 [2024-11-20 07:28:26.401622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.350 qpair failed and we were unable to recover it. 00:30:04.350 [2024-11-20 07:28:26.402024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.350 [2024-11-20 07:28:26.402055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.350 qpair failed and we were unable to recover it. 00:30:04.350 [2024-11-20 07:28:26.402474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.350 [2024-11-20 07:28:26.402506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.350 qpair failed and we were unable to recover it. 00:30:04.350 [2024-11-20 07:28:26.402862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.350 [2024-11-20 07:28:26.402894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.350 qpair failed and we were unable to recover it. 00:30:04.350 [2024-11-20 07:28:26.403307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.350 [2024-11-20 07:28:26.403338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.350 qpair failed and we were unable to recover it. 00:30:04.350 [2024-11-20 07:28:26.403744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.350 [2024-11-20 07:28:26.403776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.350 qpair failed and we were unable to recover it. 00:30:04.350 [2024-11-20 07:28:26.404013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.350 [2024-11-20 07:28:26.404044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.350 qpair failed and we were unable to recover it. 
00:30:04.350 [2024-11-20 07:28:26.404428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.350 [2024-11-20 07:28:26.404462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.350 qpair failed and we were unable to recover it. 00:30:04.350 [2024-11-20 07:28:26.404805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.350 [2024-11-20 07:28:26.404838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.350 qpair failed and we were unable to recover it. 00:30:04.350 [2024-11-20 07:28:26.405204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.350 [2024-11-20 07:28:26.405236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.350 qpair failed and we were unable to recover it. 00:30:04.350 [2024-11-20 07:28:26.405603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.350 [2024-11-20 07:28:26.405636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.350 qpair failed and we were unable to recover it. 00:30:04.350 [2024-11-20 07:28:26.405911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.350 [2024-11-20 07:28:26.405942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.350 qpair failed and we were unable to recover it. 00:30:04.350 [2024-11-20 07:28:26.406210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.406242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.406594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.406626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.406978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.407011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.407375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.407407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.407749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.407782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 
00:30:04.351 [2024-11-20 07:28:26.408020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.408052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.408432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.408466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.408833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.408865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.409230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.409264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.409662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.409692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.410060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.410093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.410381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.410413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.410720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.410750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.411138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.411177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.411519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.411550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 
00:30:04.351 [2024-11-20 07:28:26.411808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.411839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.412120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.412151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.412535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.412567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.412931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.412963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.413306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.413339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.413730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.413762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.414121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.414153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.414527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.414558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.414946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.414978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.415293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.415324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 
00:30:04.351 [2024-11-20 07:28:26.415701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.415732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.416011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.416042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.416461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.416493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.416839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.416871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.417021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.417059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.417456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.417489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.417842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.417875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.418204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.418235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.418600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.418632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.419054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.419086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 
00:30:04.351 [2024-11-20 07:28:26.419482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.419520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.419848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.419876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.420196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.420227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.351 [2024-11-20 07:28:26.420526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.351 [2024-11-20 07:28:26.420556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.351 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.420927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.420958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.421311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.421344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.421592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.421627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.421970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.422000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.422242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.422273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.422628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.422657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 
00:30:04.352 [2024-11-20 07:28:26.423007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.423037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.423310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.423342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.423481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.423516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.423881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.423913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.424245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.424278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.424399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.424429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.424820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.424849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.425234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.425266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.425630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.425659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.426003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.426034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 
00:30:04.352 [2024-11-20 07:28:26.426419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.426451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.426799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.426829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.427128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.427173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.427310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.427341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.427704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.427733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.428124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.428153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.428428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.428461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.428842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.428871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.429130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.429183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.429557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.429587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 
00:30:04.352 [2024-11-20 07:28:26.429967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.429996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.430370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.430400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.430761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.430790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.431001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.431029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.431393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.431424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.431806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.431835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.432074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.432103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.432548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.432581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.432938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.432968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.433351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.433390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 
00:30:04.352 [2024-11-20 07:28:26.433737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.433766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.352 [2024-11-20 07:28:26.434069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.352 [2024-11-20 07:28:26.434098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.352 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.434461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.434492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.434849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.434878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.435233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.435264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.435624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.435654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.436003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.436032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.436302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.436332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.436675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.436705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.437097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.437127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 
00:30:04.353 [2024-11-20 07:28:26.437510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.437540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.437931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.437959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.438373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.438404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.438766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.438796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.439025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.439054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.439480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.439510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.439872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.439903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.440285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.440315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.440572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.440601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.440971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.441000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 
00:30:04.353 [2024-11-20 07:28:26.441370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.441402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.441798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.441827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.442151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.442193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.442597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.442626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.443007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.443037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.443371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.443402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.443758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.443789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.444154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.444196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.444558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.444587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.444933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.444964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 
00:30:04.353 [2024-11-20 07:28:26.445303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.445334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.445624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.445653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.445914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.445943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.446218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.353 [2024-11-20 07:28:26.446248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.353 qpair failed and we were unable to recover it. 00:30:04.353 [2024-11-20 07:28:26.446599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.446637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.446876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.446906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.447328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.447359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.447717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.447746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.448111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.448141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.448513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.448543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 
00:30:04.354 [2024-11-20 07:28:26.448908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.448938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.449300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.449329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.449703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.449733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.450103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.450132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.450495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.450524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.450790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.450820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.451218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.451250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.451602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.451632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.451890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.451919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.452291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.452322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 
00:30:04.354 [2024-11-20 07:28:26.452655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.452683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.453045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.453074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.453453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.453484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.453858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.453887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.454135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.454174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.454562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.454592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.454965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.454994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.455370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.455401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.455718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.455748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.455955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.455985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 
00:30:04.354 [2024-11-20 07:28:26.456379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.456410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.456775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.456805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.457188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.457219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.457609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.457638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.458002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.458032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.458425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.458456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.458875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.458904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.459236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.459272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.459633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.459662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.460034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.460063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 
00:30:04.354 [2024-11-20 07:28:26.460320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.460349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.460731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.460760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.354 qpair failed and we were unable to recover it. 00:30:04.354 [2024-11-20 07:28:26.461097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.354 [2024-11-20 07:28:26.461125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.461524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.461556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.461910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.461942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.462298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.462328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.462697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.462726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.463075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.463104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.463500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.463531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.463869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.463899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 
00:30:04.355 [2024-11-20 07:28:26.464240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.464277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.464618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.464649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.465023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.465052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.465396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.465426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.465791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.465821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.466183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.466215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.466576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.466606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.466945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.466973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.467323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.467354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.467716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.467746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 
00:30:04.355 [2024-11-20 07:28:26.468152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.468197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.468558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.468588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.468934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.468962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.469358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.469388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.469719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.469754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.470091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.470120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.470521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.470552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.470908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.470936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.471292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.471323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.471699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.471729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 
00:30:04.355 [2024-11-20 07:28:26.472074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.472103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.472440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.472471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.472816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.472846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.473220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.473252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.473607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.473636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.473865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.473893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.474244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.474273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.474590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.474618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.474976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.475007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.355 [2024-11-20 07:28:26.475243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.475273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 
00:30:04.355 [2024-11-20 07:28:26.475615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-11-20 07:28:26.475644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.355 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.476014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.476043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.476284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.476317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.476685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.476715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.477038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.477068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.477445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.477476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.477811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.477841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.478133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.478173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.478419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.478448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.478795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.478824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 
00:30:04.356 [2024-11-20 07:28:26.479195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.479228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.479579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.479622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.479897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.479925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.480274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.480305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.480671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.480703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.481065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.481095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.481422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.481454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.481691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.481722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.482108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.482138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.482488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.482517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 
00:30:04.356 [2024-11-20 07:28:26.482875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.482904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.483184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.483217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.483464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.483494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.483756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.483786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.484138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.484180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.484571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.484602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.484938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.484968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.485357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.485389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.485747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.485777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.486150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.486193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 
00:30:04.356 [2024-11-20 07:28:26.486569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.486598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.486930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.486960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.487342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.487373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.487629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.487660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.488023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.488053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.488420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.488452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.488814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.488844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.489195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-11-20 07:28:26.489225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.356 qpair failed and we were unable to recover it. 00:30:04.356 [2024-11-20 07:28:26.489606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.489636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.489979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.490008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 
00:30:04.357 [2024-11-20 07:28:26.490353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.490383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.490733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.490763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.490994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.491027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.491394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.491424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.491657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.491686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.492029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.492058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.492443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.492473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.492845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.492875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.493232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.493262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.493512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.493541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 
00:30:04.357 [2024-11-20 07:28:26.493880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.493910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.494261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.494292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.494520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.494552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.494781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.494812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.495157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.495199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.495552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.495581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.495893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.495922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.496255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.496286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.496644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.496674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.497009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.497039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 
00:30:04.357 [2024-11-20 07:28:26.497402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.497435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.497782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.497811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.498189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.498221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.498574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.498603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.498938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.498967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.499304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.499334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.499677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.499710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.500064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.500094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.500477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.500508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.500749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.500778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 
00:30:04.357 [2024-11-20 07:28:26.501152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.501199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.357 [2024-11-20 07:28:26.501558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.357 [2024-11-20 07:28:26.501596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.357 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.501933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.501962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.502312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.502344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.502674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.502702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.503036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.503066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.503434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.503464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.503803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.503833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.504179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.504210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.504454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.504488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 
00:30:04.358 [2024-11-20 07:28:26.504885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.504913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.505272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.505303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.505666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.505695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.506077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.506107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.506500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.506531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.506858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.506886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.507222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.507253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.507626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.507656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.508009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.508037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.508331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.508361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 
00:30:04.358 [2024-11-20 07:28:26.508719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.508748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.509115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.509145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.509505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.509534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.509900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.509929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.510309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.510340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.510574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.510603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.510963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.510992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.511359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.511390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.511814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.511844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 00:30:04.358 [2024-11-20 07:28:26.512245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.512276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it. 
00:30:04.358 [2024-11-20 07:28:26.512649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.358 [2024-11-20 07:28:26.512678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.358 qpair failed and we were unable to recover it.
[... this three-message failure sequence repeats 210 times in total between 07:28:26.512649 and 07:28:26.594365; every attempt targets tqpair=0x20cc0c0 at 10.0.0.2:4420 and fails with errno = 111 (ECONNREFUSED) ...]
00:30:04.644 [2024-11-20 07:28:26.594335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.644 [2024-11-20 07:28:26.594365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.644 qpair failed and we were unable to recover it.
00:30:04.644 [2024-11-20 07:28:26.594740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.644 [2024-11-20 07:28:26.594770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.644 qpair failed and we were unable to recover it. 00:30:04.644 [2024-11-20 07:28:26.595142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.644 [2024-11-20 07:28:26.595188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.644 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.595545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.595574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.595907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.595938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.596208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.596239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.596578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.596607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.596938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.596975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.597318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.597349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.597723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.597752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.598155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.598201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 
00:30:04.645 [2024-11-20 07:28:26.598554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.598585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.598928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.598957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.599295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.599335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.599691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.599726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.600037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.600066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.600448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.600479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.600803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.600834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.601185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.601216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.601537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.601566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.601944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.601973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 
00:30:04.645 [2024-11-20 07:28:26.602312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.602342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.602698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.602727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.603088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.603118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.603489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.603527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.603856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.603886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.604262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.604294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.604653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.604683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.604999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.605071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.605426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.605458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.605825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.605858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 
00:30:04.645 [2024-11-20 07:28:26.606200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.606233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.606578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.606606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.606950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.606979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.607320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.607349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.607704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.607733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.608097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.608127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.608523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.608555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.608895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.608925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.609258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.609289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 00:30:04.645 [2024-11-20 07:28:26.609657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.645 [2024-11-20 07:28:26.609687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.645 qpair failed and we were unable to recover it. 
00:30:04.646 [2024-11-20 07:28:26.610066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.610097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.610490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.610524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.610913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.610943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.611303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.611334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.611694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.611724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.612105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.612134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.612467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.612497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.612862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.612892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.613271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.613304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.613655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.613684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 
00:30:04.646 [2024-11-20 07:28:26.614055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.614083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.614451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.614482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.614830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.614860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.615239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.615271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.615602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.615632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.616016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.616047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.616411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.616442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.616767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.616799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.617035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.617070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.617462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.617493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 
00:30:04.646 [2024-11-20 07:28:26.617835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.617865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.618222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.618252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.618579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.618610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.618949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.618978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.619236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.619266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.619598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.619629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.619968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.619997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.620352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.620383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.620717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.620747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.621084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.621114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 
00:30:04.646 [2024-11-20 07:28:26.621509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.621541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.621919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.621950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.622289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.622320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.622569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.622602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.622932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.622962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.623321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.623353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.623727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.623756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.646 qpair failed and we were unable to recover it. 00:30:04.646 [2024-11-20 07:28:26.624107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.646 [2024-11-20 07:28:26.624137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.624510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.624540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.624863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.624894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 
00:30:04.647 [2024-11-20 07:28:26.625237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.625268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.625597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.625634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.625984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.626014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.626389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.626420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.626762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.626790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.627171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.627203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.627550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.627582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.627918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.627947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.628325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.628359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.628714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.628745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 
00:30:04.647 [2024-11-20 07:28:26.629072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.629102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.629459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.629491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.629843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.629874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.630216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.630246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.630597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.630628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.630995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.631026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.631385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.631418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.631767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.631796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.632185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.632217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.632570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.632599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 
00:30:04.647 [2024-11-20 07:28:26.632955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.632985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.633401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.633432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.633765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.633794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.634150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.634192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.634531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.634559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.634938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.634968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.635341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.635373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.635691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.635721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.636088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.636131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.636519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.636550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 
00:30:04.647 [2024-11-20 07:28:26.636902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.636933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.637269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.637301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.637640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.637671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.638017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.638046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.647 [2024-11-20 07:28:26.638406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.647 [2024-11-20 07:28:26.638438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.647 qpair failed and we were unable to recover it. 00:30:04.648 [2024-11-20 07:28:26.638800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.648 [2024-11-20 07:28:26.638829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.648 qpair failed and we were unable to recover it. 00:30:04.648 [2024-11-20 07:28:26.639201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.648 [2024-11-20 07:28:26.639233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.648 qpair failed and we were unable to recover it. 00:30:04.648 [2024-11-20 07:28:26.639628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.648 [2024-11-20 07:28:26.639657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.648 qpair failed and we were unable to recover it. 00:30:04.648 [2024-11-20 07:28:26.639895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.648 [2024-11-20 07:28:26.639928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.648 qpair failed and we were unable to recover it. 00:30:04.648 [2024-11-20 07:28:26.640276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.648 [2024-11-20 07:28:26.640308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.648 qpair failed and we were unable to recover it. 
00:30:04.648 [2024-11-20 07:28:26.640659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.648 [2024-11-20 07:28:26.640689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.648 qpair failed and we were unable to recover it. 00:30:04.648 [2024-11-20 07:28:26.640993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.648 [2024-11-20 07:28:26.641023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.648 qpair failed and we were unable to recover it. 00:30:04.648 [2024-11-20 07:28:26.641394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.648 [2024-11-20 07:28:26.641428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.648 qpair failed and we were unable to recover it. 00:30:04.648 [2024-11-20 07:28:26.641837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.648 [2024-11-20 07:28:26.641868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.648 qpair failed and we were unable to recover it. 00:30:04.648 [2024-11-20 07:28:26.642106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.648 [2024-11-20 07:28:26.642140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.648 qpair failed and we were unable to recover it. 00:30:04.648 [2024-11-20 07:28:26.642531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.648 [2024-11-20 07:28:26.642563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.648 qpair failed and we were unable to recover it. 00:30:04.648 [2024-11-20 07:28:26.642928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.648 [2024-11-20 07:28:26.642958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.648 qpair failed and we were unable to recover it. 00:30:04.648 [2024-11-20 07:28:26.643308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.648 [2024-11-20 07:28:26.643340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.648 qpair failed and we were unable to recover it. 00:30:04.648 [2024-11-20 07:28:26.643589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.648 [2024-11-20 07:28:26.643619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.648 qpair failed and we were unable to recover it. 00:30:04.648 [2024-11-20 07:28:26.643988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.648 [2024-11-20 07:28:26.644018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.648 qpair failed and we were unable to recover it. 
00:30:04.648 [2024-11-20 07:28:26.644649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.648 [2024-11-20 07:28:26.644689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:04.648 qpair failed and we were unable to recover it.
00:30:04.648 [the same connect() failed (errno = 111) / sock connection error pair for tqpair=0x20cc0c0 (addr=10.0.0.2, port=4420) repeats for every reconnect attempt from 07:28:26.644 through 07:28:26.722, each attempt ending "qpair failed and we were unable to recover it."]
00:30:04.654 [2024-11-20 07:28:26.722369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.654 [2024-11-20 07:28:26.722401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:04.654 qpair failed and we were unable to recover it.
00:30:04.654 [2024-11-20 07:28:26.722734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.722763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.723124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.723153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.723484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.723512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.723878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.723907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.724236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.724266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.724629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.724658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.724905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.724933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.725293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.725325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.725723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.725753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.726127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.726156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 
00:30:04.654 [2024-11-20 07:28:26.726467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.726497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.726855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.726886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.727239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.727277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.727662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.727690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.728056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.728085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.728425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.728457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.728743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.728772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.729031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.729060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.729403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.729435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.729788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.729819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 
00:30:04.654 [2024-11-20 07:28:26.730243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.730274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.730597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.730627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.730976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.731005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.731383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.731415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.731771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.731801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.732180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.732210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.732569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.732599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.732949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.732978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.733308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.733338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.733675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.733705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 
00:30:04.654 [2024-11-20 07:28:26.734071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.654 [2024-11-20 07:28:26.734101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.654 qpair failed and we were unable to recover it. 00:30:04.654 [2024-11-20 07:28:26.734464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.734494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.734868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.734897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.735270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.735301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.735668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.735697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.736069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.736099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.736526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.736557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.736929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.736957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.737320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.737350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.737708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.737738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 
00:30:04.655 [2024-11-20 07:28:26.738114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.738144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.738523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.738552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.738901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.738930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.739295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.739326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.739708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.739739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.740012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.740041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.740412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.740443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.740794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.740824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.741171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.741202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.741583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.741613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 
00:30:04.655 [2024-11-20 07:28:26.741986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.742017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.742254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.742285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.742603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.742634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.742976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.743006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.743256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.743287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.743645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.743682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.744018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.744046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.744378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.744407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.744752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.744781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.745156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.745198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 
00:30:04.655 [2024-11-20 07:28:26.745623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.745653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.746005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.746035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.746390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.746421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.746655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.746683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.747041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.747070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.747402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.747435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.747797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.747826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.748181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.748213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.748568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.655 [2024-11-20 07:28:26.748597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.655 qpair failed and we were unable to recover it. 00:30:04.655 [2024-11-20 07:28:26.748958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.748988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 
00:30:04.656 [2024-11-20 07:28:26.749343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.749373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.749734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.749763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.750143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.750186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.750613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.750643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.750900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.750928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.751300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.751331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.751692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.751723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.752014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.752043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.752452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.752491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.752824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.752853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 
00:30:04.656 [2024-11-20 07:28:26.753106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.753140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.753571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.753603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.753946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.753975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.754337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.754368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.754742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.754773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.755182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.755213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.755555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.755584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.755928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.755958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.756312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.756345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.756737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.756766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 
00:30:04.656 [2024-11-20 07:28:26.757110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.757139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.757529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.757559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.757916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.757948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.758310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.758342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.758683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.758714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.759065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.759093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.759467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.759500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.759840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.759870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.760223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.760256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.760591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.760621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 
00:30:04.656 [2024-11-20 07:28:26.760954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.760983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.761356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.761388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.761733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.761763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.762121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.762150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.762498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.762528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.762891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.762921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.763263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.763295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.656 qpair failed and we were unable to recover it. 00:30:04.656 [2024-11-20 07:28:26.763652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.656 [2024-11-20 07:28:26.763688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.657 qpair failed and we were unable to recover it. 00:30:04.657 [2024-11-20 07:28:26.764070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.657 [2024-11-20 07:28:26.764099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.657 qpair failed and we were unable to recover it. 00:30:04.657 [2024-11-20 07:28:26.764467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.657 [2024-11-20 07:28:26.764498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.657 qpair failed and we were unable to recover it. 
00:30:04.657 [2024-11-20 07:28:26.764850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.657 [2024-11-20 07:28:26.764880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.657 qpair failed and we were unable to recover it. 00:30:04.657 [2024-11-20 07:28:26.765239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.657 [2024-11-20 07:28:26.765271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.657 qpair failed and we were unable to recover it. 00:30:04.657 [2024-11-20 07:28:26.765499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.657 [2024-11-20 07:28:26.765529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.657 qpair failed and we were unable to recover it. 00:30:04.657 [2024-11-20 07:28:26.765875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.657 [2024-11-20 07:28:26.765905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.657 qpair failed and we were unable to recover it. 00:30:04.657 [2024-11-20 07:28:26.766316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.657 [2024-11-20 07:28:26.766347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.657 qpair failed and we were unable to recover it. 00:30:04.657 [2024-11-20 07:28:26.766693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.657 [2024-11-20 07:28:26.766722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.657 qpair failed and we were unable to recover it. 00:30:04.657 [2024-11-20 07:28:26.767084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.657 [2024-11-20 07:28:26.767114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.657 qpair failed and we were unable to recover it. 00:30:04.657 [2024-11-20 07:28:26.767487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.657 [2024-11-20 07:28:26.767520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.657 qpair failed and we were unable to recover it. 00:30:04.657 [2024-11-20 07:28:26.767915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.657 [2024-11-20 07:28:26.767944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.657 qpair failed and we were unable to recover it. 00:30:04.657 [2024-11-20 07:28:26.768285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.657 [2024-11-20 07:28:26.768316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.657 qpair failed and we were unable to recover it. 
00:30:04.657 [2024-11-20 07:28:26.768624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.657 [2024-11-20 07:28:26.768653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.657 qpair failed and we were unable to recover it. 00:30:04.657 [2024-11-20 07:28:26.769040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.657 [2024-11-20 07:28:26.769070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.657 qpair failed and we were unable to recover it. 00:30:04.657 [2024-11-20 07:28:26.769450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.657 [2024-11-20 07:28:26.769481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.657 qpair failed and we were unable to recover it. 00:30:04.657 [2024-11-20 07:28:26.769813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.657 [2024-11-20 07:28:26.769843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.657 qpair failed and we were unable to recover it. 00:30:04.657 [2024-11-20 07:28:26.770217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.657 [2024-11-20 07:28:26.770248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.657 qpair failed and we were unable to recover it. 00:30:04.657 [2024-11-20 07:28:26.770596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.657 [2024-11-20 07:28:26.770625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.657 qpair failed and we were unable to recover it. 00:30:04.657 [2024-11-20 07:28:26.770977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.657 [2024-11-20 07:28:26.771008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.657 qpair failed and we were unable to recover it. 00:30:04.657 [2024-11-20 07:28:26.771335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.657 [2024-11-20 07:28:26.771365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.657 qpair failed and we were unable to recover it. 00:30:04.657 [2024-11-20 07:28:26.771750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.657 [2024-11-20 07:28:26.771781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.657 qpair failed and we were unable to recover it. 00:30:04.657 [2024-11-20 07:28:26.772133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.657 [2024-11-20 07:28:26.772181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.657 qpair failed and we were unable to recover it. 
00:30:04.657 [2024-11-20 07:28:26.772538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.657 [2024-11-20 07:28:26.772568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:04.657 qpair failed and we were unable to recover it.
[... the same pair of errors (posix.c:1054 connect() failed, errno = 111, followed by nvme_tcp.c:2288 sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420) and the line "qpair failed and we were unable to recover it." repeat continuously for every retry between 07:28:26.772 and 07:28:26.853 ...]
00:30:04.663 [2024-11-20 07:28:26.853194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.663 [2024-11-20 07:28:26.853227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:04.663 qpair failed and we were unable to recover it.
00:30:04.663 [2024-11-20 07:28:26.853557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.853587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.853946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.853977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.854331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.854362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.854734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.854762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.855140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.855182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.855532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.855562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.855896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.855925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.856267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.856298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.856547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.856578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.856992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.857023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 
00:30:04.663 [2024-11-20 07:28:26.857385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.857417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.857767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.857796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.858155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.858198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.858592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.858621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.858988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.859016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.859405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.859436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.859806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.859835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.860188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.860217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.860582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.860611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.860990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.861020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 
00:30:04.663 [2024-11-20 07:28:26.861404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.861435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.861789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.861819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.862178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.862210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.862570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.862607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.862993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.863022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.863290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.863321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.663 qpair failed and we were unable to recover it. 00:30:04.663 [2024-11-20 07:28:26.863686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.663 [2024-11-20 07:28:26.863715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.864097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.864127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.864505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.864535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.864895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.864925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 
00:30:04.664 [2024-11-20 07:28:26.865185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.865218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.865570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.865599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.865935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.865963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.866310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.866341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.866677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.866707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.867086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.867115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.867467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.867499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.867842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.867871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.868255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.868287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.868662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.868692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 
00:30:04.664 [2024-11-20 07:28:26.869043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.869073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.869424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.869455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.869788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.869818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.870171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.870203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.870581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.870612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.870945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.870976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.871230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.871260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.871654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.871684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.872025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.872053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.872401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.872430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 
00:30:04.664 [2024-11-20 07:28:26.872758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.872789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.873129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.873171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.873534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.873570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.873951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.873980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.874335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.874366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.874734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.874763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.875114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.875144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.875558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.875589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.875847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.875880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.876210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.876241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 
00:30:04.664 [2024-11-20 07:28:26.876569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.876600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.876941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.876969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.877302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.877334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.877678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.877708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.664 [2024-11-20 07:28:26.878091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.664 [2024-11-20 07:28:26.878120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.664 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.878505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.878543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.878938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.878967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.879342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.879374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.879742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.879771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.880152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.880193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 
00:30:04.665 [2024-11-20 07:28:26.880422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.880455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.880856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.880886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.881233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.881263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.881641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.881670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.882001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.882032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.882391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.882422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.882766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.882795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.883141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.883184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.883542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.883573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.883943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.883980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 
00:30:04.665 [2024-11-20 07:28:26.884340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.884372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.884754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.884783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.885135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.885187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.885525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.885555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.885925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.885954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.886207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.886241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.886636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.886666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.887038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.887068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.887423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.887453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.887810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.887839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 
00:30:04.665 [2024-11-20 07:28:26.888191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.888222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.888599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.888629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.888984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.889014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.889342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.889373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.889718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.889747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.890081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.890111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.890390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.890419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.890806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.890836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.891178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.891212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.891559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.891590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 
00:30:04.665 [2024-11-20 07:28:26.891964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.891995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.892345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.892376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.892710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.665 [2024-11-20 07:28:26.892748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.665 qpair failed and we were unable to recover it. 00:30:04.665 [2024-11-20 07:28:26.893118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.893148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.893515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.893546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.893912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.893941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.894316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.894354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.894711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.894741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.895123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.895153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.895491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.895521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 
00:30:04.666 [2024-11-20 07:28:26.895893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.895922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.896277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.896307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.896660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.896689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.897095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.897123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.897503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.897533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.897880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.897910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.898186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.898217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.898597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.898628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.899041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.899070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.899235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.899265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 
00:30:04.666 [2024-11-20 07:28:26.899635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.899666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.899915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.899945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.900299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.900330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.900700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.900731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.901108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.901138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.901433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.901463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.901713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.901743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.902135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.902179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.902521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.902552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 00:30:04.666 [2024-11-20 07:28:26.902829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.666 [2024-11-20 07:28:26.902862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.666 qpair failed and we were unable to recover it. 
00:30:04.666 [2024-11-20 07:28:26.903198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.666 [2024-11-20 07:28:26.903229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:04.666 qpair failed and we were unable to recover it.
...
00:30:04.946 [2024-11-20 07:28:26.981704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.946 [2024-11-20 07:28:26.981734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:04.946 qpair failed and we were unable to recover it.
00:30:04.946 [2024-11-20 07:28:26.982135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.946 [2024-11-20 07:28:26.982196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.946 qpair failed and we were unable to recover it. 00:30:04.946 [2024-11-20 07:28:26.982550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.946 [2024-11-20 07:28:26.982580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.946 qpair failed and we were unable to recover it. 00:30:04.946 [2024-11-20 07:28:26.982919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.946 [2024-11-20 07:28:26.982948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.946 qpair failed and we were unable to recover it. 00:30:04.946 [2024-11-20 07:28:26.983328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.946 [2024-11-20 07:28:26.983360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.946 qpair failed and we were unable to recover it. 00:30:04.946 [2024-11-20 07:28:26.983742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.946 [2024-11-20 07:28:26.983771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.946 qpair failed and we were unable to recover it. 00:30:04.946 [2024-11-20 07:28:26.984149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.946 [2024-11-20 07:28:26.984192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.946 qpair failed and we were unable to recover it. 00:30:04.946 [2024-11-20 07:28:26.984554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.946 [2024-11-20 07:28:26.984583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.946 qpair failed and we were unable to recover it. 00:30:04.946 [2024-11-20 07:28:26.984942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.946 [2024-11-20 07:28:26.984971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.946 qpair failed and we were unable to recover it. 00:30:04.946 [2024-11-20 07:28:26.985345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.946 [2024-11-20 07:28:26.985377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.946 qpair failed and we were unable to recover it. 00:30:04.946 [2024-11-20 07:28:26.985735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.946 [2024-11-20 07:28:26.985765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.946 qpair failed and we were unable to recover it. 
00:30:04.946 [2024-11-20 07:28:26.986098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.946 [2024-11-20 07:28:26.986128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.946 qpair failed and we were unable to recover it. 00:30:04.946 [2024-11-20 07:28:26.986486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.946 [2024-11-20 07:28:26.986517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.946 qpair failed and we were unable to recover it. 00:30:04.946 [2024-11-20 07:28:26.986916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.946 [2024-11-20 07:28:26.986945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.946 qpair failed and we were unable to recover it. 00:30:04.946 [2024-11-20 07:28:26.987309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.946 [2024-11-20 07:28:26.987339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.946 qpair failed and we were unable to recover it. 00:30:04.946 [2024-11-20 07:28:26.987698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.946 [2024-11-20 07:28:26.987728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.946 qpair failed and we were unable to recover it. 00:30:04.946 [2024-11-20 07:28:26.988101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.946 [2024-11-20 07:28:26.988130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.946 qpair failed and we were unable to recover it. 00:30:04.946 [2024-11-20 07:28:26.988526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.946 [2024-11-20 07:28:26.988558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.946 qpair failed and we were unable to recover it. 00:30:04.946 [2024-11-20 07:28:26.988911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.946 [2024-11-20 07:28:26.988940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.946 qpair failed and we were unable to recover it. 00:30:04.946 [2024-11-20 07:28:26.989312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.946 [2024-11-20 07:28:26.989344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.946 qpair failed and we were unable to recover it. 00:30:04.946 [2024-11-20 07:28:26.989700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.989730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 
00:30:04.947 [2024-11-20 07:28:26.990151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.990193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.990549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.990578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.990900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.990931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.991291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.991323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.991645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.991682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.992017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.992047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.992290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.992325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.992727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.992756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.993131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.993173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.993544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.993574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 
00:30:04.947 [2024-11-20 07:28:26.993925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.993954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.994296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.994329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.994693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.994723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.995051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.995082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.995443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.995475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.995845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.995875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.996114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.996143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.996495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.996527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.996760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.996793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.997185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.997217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 
00:30:04.947 [2024-11-20 07:28:26.997555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.997585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.997909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.997940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.998282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.998313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.998713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.998743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.999102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.999131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.999371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.999405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:26.999748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:26.999779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:27.000155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:27.000199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:27.000549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:27.000578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:27.000913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:27.000949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 
00:30:04.947 [2024-11-20 07:28:27.001290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:27.001322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:27.001672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.947 [2024-11-20 07:28:27.001711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.947 qpair failed and we were unable to recover it. 00:30:04.947 [2024-11-20 07:28:27.002045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.002076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.002404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.002435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.002772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.002803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.003184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.003216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.003578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.003607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.003954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.003991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.004248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.004282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.004641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.004673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 
00:30:04.948 [2024-11-20 07:28:27.005002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.005032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.005306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.005336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.005562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.005596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.005967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.005997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.006372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.006404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.006691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.006722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.007065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.007094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.007432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.007465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.007707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.007737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.008074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.008103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 
00:30:04.948 [2024-11-20 07:28:27.008455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.008486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.008740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.008774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.009117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.009148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.009519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.009549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.009923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.009953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.010297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.010329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.010667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.010698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.011032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.011061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.011395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.011433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.011764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.011794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 
00:30:04.948 [2024-11-20 07:28:27.012127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.012168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.948 [2024-11-20 07:28:27.012529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.948 [2024-11-20 07:28:27.012559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.948 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.012970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.013000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.013330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.013367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.013747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.013778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.014128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.014170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.014347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.014376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.014699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.014729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.015067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.015097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.015475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.015509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 
00:30:04.949 [2024-11-20 07:28:27.015847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.015876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.016264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.016296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.016698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.016729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.017089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.017119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.017516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.017548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.017918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.017948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.018306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.018337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.018706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.018737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.019090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.019120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.019455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.019487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 
00:30:04.949 [2024-11-20 07:28:27.019852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.019884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.020234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.020266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.020680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.020710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.021056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.021086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.021366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.021398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.021751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.021780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.022139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.022179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.022495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.022524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.022884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.022913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.023250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.023279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 
00:30:04.949 [2024-11-20 07:28:27.023658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.023688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.024029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.024059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.024420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.024450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.024827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.949 [2024-11-20 07:28:27.024857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.949 qpair failed and we were unable to recover it. 00:30:04.949 [2024-11-20 07:28:27.025211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.950 [2024-11-20 07:28:27.025241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.950 qpair failed and we were unable to recover it. 00:30:04.950 [2024-11-20 07:28:27.025608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.950 [2024-11-20 07:28:27.025637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.950 qpair failed and we were unable to recover it. 00:30:04.950 [2024-11-20 07:28:27.026006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.950 [2024-11-20 07:28:27.026035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.950 qpair failed and we were unable to recover it. 00:30:04.950 [2024-11-20 07:28:27.026408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.950 [2024-11-20 07:28:27.026439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.950 qpair failed and we were unable to recover it. 00:30:04.950 [2024-11-20 07:28:27.026787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.950 [2024-11-20 07:28:27.026815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.950 qpair failed and we were unable to recover it. 00:30:04.950 [2024-11-20 07:28:27.027126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.950 [2024-11-20 07:28:27.027173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.950 qpair failed and we were unable to recover it. 
00:30:04.950 [2024-11-20 07:28:27.027501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.950 [2024-11-20 07:28:27.027531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.950 qpair failed and we were unable to recover it. 00:30:04.950 [2024-11-20 07:28:27.027905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.950 [2024-11-20 07:28:27.027934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.950 qpair failed and we were unable to recover it. 00:30:04.950 [2024-11-20 07:28:27.028261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.950 [2024-11-20 07:28:27.028292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.950 qpair failed and we were unable to recover it. 00:30:04.950 [2024-11-20 07:28:27.028658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.950 [2024-11-20 07:28:27.028687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.950 qpair failed and we were unable to recover it. 00:30:04.950 [2024-11-20 07:28:27.029069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.950 [2024-11-20 07:28:27.029099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.950 qpair failed and we were unable to recover it. 00:30:04.950 [2024-11-20 07:28:27.029489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.950 [2024-11-20 07:28:27.029520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.950 qpair failed and we were unable to recover it. 00:30:04.950 [2024-11-20 07:28:27.029922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.950 [2024-11-20 07:28:27.029952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.950 qpair failed and we were unable to recover it. 00:30:04.950 [2024-11-20 07:28:27.030280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.950 [2024-11-20 07:28:27.030311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.950 qpair failed and we were unable to recover it. 00:30:04.950 [2024-11-20 07:28:27.030684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.950 [2024-11-20 07:28:27.030714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.950 qpair failed and we were unable to recover it. 00:30:04.950 [2024-11-20 07:28:27.031062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.950 [2024-11-20 07:28:27.031090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.950 qpair failed and we were unable to recover it. 
00:30:04.950 [2024-11-20 07:28:27.031460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.950 [2024-11-20 07:28:27.031491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:04.950 qpair failed and we were unable to recover it.
00:30:04.950 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously, with only the timestamps advancing, from 07:28:27.031823 through 07:28:27.106594 ...]
00:30:04.957 [2024-11-20 07:28:27.106955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.957 [2024-11-20 07:28:27.106984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:04.957 qpair failed and we were unable to recover it.
00:30:04.957 [2024-11-20 07:28:27.110340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.957 [2024-11-20 07:28:27.110371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:04.957 qpair failed and we were unable to recover it.
00:30:04.957 [2024-11-20 07:28:27.110699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-11-20 07:28:27.110727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-11-20 07:28:27.111062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-11-20 07:28:27.111090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-11-20 07:28:27.111431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-11-20 07:28:27.111463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-11-20 07:28:27.111777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-11-20 07:28:27.111806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-11-20 07:28:27.112152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-11-20 07:28:27.112195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-11-20 07:28:27.112552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-11-20 07:28:27.112580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-11-20 07:28:27.112961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-11-20 07:28:27.112991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-11-20 07:28:27.113337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-11-20 07:28:27.113368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-11-20 07:28:27.113602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-11-20 07:28:27.113630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-11-20 07:28:27.113989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-11-20 07:28:27.114018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 
00:30:04.957 [2024-11-20 07:28:27.114386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-11-20 07:28:27.114417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-11-20 07:28:27.114813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-11-20 07:28:27.114841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-11-20 07:28:27.115182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-11-20 07:28:27.115214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-11-20 07:28:27.115563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-11-20 07:28:27.115593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-11-20 07:28:27.115970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-11-20 07:28:27.116000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-11-20 07:28:27.116358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-11-20 07:28:27.116389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-11-20 07:28:27.116723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-11-20 07:28:27.116754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-11-20 07:28:27.117109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-11-20 07:28:27.117138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-11-20 07:28:27.117452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-11-20 07:28:27.117481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-11-20 07:28:27.117718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-11-20 07:28:27.117747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 
00:30:04.958 [2024-11-20 07:28:27.118094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.118123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.118444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.118476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.118819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.118847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.119231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.119268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.119625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.119663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.119996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.120025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.120425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.120455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.120819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.120848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.121177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.121208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.121546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.121575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 
00:30:04.958 [2024-11-20 07:28:27.121944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.121973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.122330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.122360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.122615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.122649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.123040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.123070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.123476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.123506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.123743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.123772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.124114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.124144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.124528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.124559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.124899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.124928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.125265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.125295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 
00:30:04.958 [2024-11-20 07:28:27.125639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.125669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.126005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.126036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.126285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.126316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.126659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.126687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.127073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.127104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.127477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.127508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.127859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.127888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.128249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.128279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.128652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.128682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-11-20 07:28:27.129044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-11-20 07:28:27.129073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 
00:30:04.958 [2024-11-20 07:28:27.129337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.129374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.129758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.129787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.130169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.130200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.130562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.130590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.130948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.130977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.131358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.131390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.131736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.131765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.132114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.132144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.132512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.132541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.132915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.132946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 
00:30:04.959 [2024-11-20 07:28:27.133311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.133342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.133729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.133758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.134103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.134132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.134394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.134424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.134772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.134802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.135185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.135216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.135573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.135602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.135932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.135962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.136338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.136369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.136722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.136751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 
00:30:04.959 [2024-11-20 07:28:27.137115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.137144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.137551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.137580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.137934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.137962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.138328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.138360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.138750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.138778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.139124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.139153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.139501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.139530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.139880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.139916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.140256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.140287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.140671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.140699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 
00:30:04.959 [2024-11-20 07:28:27.141077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.141106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.141537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.141568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.141942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.141970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.142286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.142317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-11-20 07:28:27.142681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-11-20 07:28:27.142711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.143083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.143114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.143476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.143506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.143763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.143792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.144120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.144150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.144531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.144562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 
00:30:04.960 [2024-11-20 07:28:27.144895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.144925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.145189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.145219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.145541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.145571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.145957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.145987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.146333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.146364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.146741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.146770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.147130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.147176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.147444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.147473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.147820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.147849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.148203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.148235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 
00:30:04.960 [2024-11-20 07:28:27.148599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.148628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.148965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.148995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.149337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.149367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.149729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.149758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.150107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.150137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.150488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.150519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.150880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.150910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.151265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.151297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.151672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.151702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.151962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.151992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 
00:30:04.960 [2024-11-20 07:28:27.152343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.152374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.152735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.152764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.153126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.153155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.153563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.153594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.153953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.153982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.154247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.154277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.154660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.154690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.154897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.154926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.155284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.960 [2024-11-20 07:28:27.155321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.960 qpair failed and we were unable to recover it. 00:30:04.960 [2024-11-20 07:28:27.155722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.961 [2024-11-20 07:28:27.155752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.961 qpair failed and we were unable to recover it. 
00:30:04.961 [2024-11-20 07:28:27.156102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.961 [2024-11-20 07:28:27.156132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.961 qpair failed and we were unable to recover it. 00:30:04.961 [2024-11-20 07:28:27.156493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.961 [2024-11-20 07:28:27.156524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.961 qpair failed and we were unable to recover it. 00:30:04.961 [2024-11-20 07:28:27.156863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.961 [2024-11-20 07:28:27.156894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.961 qpair failed and we were unable to recover it. 00:30:04.961 [2024-11-20 07:28:27.157241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.961 [2024-11-20 07:28:27.157272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.961 qpair failed and we were unable to recover it. 00:30:04.961 [2024-11-20 07:28:27.157549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.961 [2024-11-20 07:28:27.157578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.961 qpair failed and we were unable to recover it. 00:30:04.961 [2024-11-20 07:28:27.157945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.961 [2024-11-20 07:28:27.157973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.961 qpair failed and we were unable to recover it. 00:30:04.961 [2024-11-20 07:28:27.158280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.961 [2024-11-20 07:28:27.158309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.961 qpair failed and we were unable to recover it. 00:30:04.961 [2024-11-20 07:28:27.158675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.961 [2024-11-20 07:28:27.158705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.961 qpair failed and we were unable to recover it. 00:30:04.961 [2024-11-20 07:28:27.159076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.961 [2024-11-20 07:28:27.159105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.961 qpair failed and we were unable to recover it. 00:30:04.961 [2024-11-20 07:28:27.159504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.961 [2024-11-20 07:28:27.159535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.961 qpair failed and we were unable to recover it. 
00:30:04.961 [2024-11-20 07:28:27.159660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.961 [2024-11-20 07:28:27.159694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:04.961 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats with advancing timestamps, about 210 times in total, between 07:28:27.159 and 07:28:27.236 ...]
00:30:05.245 [2024-11-20 07:28:27.236348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-11-20 07:28:27.236379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it.
00:30:05.245 [2024-11-20 07:28:27.236771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-11-20 07:28:27.236801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it. 00:30:05.245 [2024-11-20 07:28:27.237152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-11-20 07:28:27.237192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it. 00:30:05.245 [2024-11-20 07:28:27.237530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-11-20 07:28:27.237560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it. 00:30:05.245 [2024-11-20 07:28:27.237931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-11-20 07:28:27.237962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it. 00:30:05.245 [2024-11-20 07:28:27.238320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-11-20 07:28:27.238351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it. 00:30:05.245 [2024-11-20 07:28:27.238688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-11-20 07:28:27.238718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it. 00:30:05.245 [2024-11-20 07:28:27.239065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-11-20 07:28:27.239094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it. 00:30:05.245 [2024-11-20 07:28:27.239348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-11-20 07:28:27.239381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it. 00:30:05.245 [2024-11-20 07:28:27.239736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-11-20 07:28:27.239766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it. 00:30:05.245 [2024-11-20 07:28:27.240129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-11-20 07:28:27.240167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it. 
00:30:05.245 [2024-11-20 07:28:27.240557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-11-20 07:28:27.240586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it. 00:30:05.245 [2024-11-20 07:28:27.240920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.240949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.241301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.241330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.241672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.241701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.242009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.242037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.242415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.242446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.242778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.242808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.243193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.243223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.243582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.243619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.243961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.243991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 
00:30:05.246 [2024-11-20 07:28:27.244321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.244352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.244675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.244711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.245042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.245077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.245373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.245403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.245763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.245793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.246149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.246192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.246557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.246586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.246916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.246946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.247179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.247211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.247560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.247594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 
00:30:05.246 [2024-11-20 07:28:27.247929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.247958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.248298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.248331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.248571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.248600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.248945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.248980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.249212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.249246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.249588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.249617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.249995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.250024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.250427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.250459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.250588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.250618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-11-20 07:28:27.250990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-11-20 07:28:27.251018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 
00:30:05.247 [2024-11-20 07:28:27.251386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.251417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.251612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.251641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.251996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.252026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.252392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.252423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.252769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.252798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.253089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.253119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.253423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.253455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.253846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.253877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.254238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.254270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.254631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.254667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 
00:30:05.247 [2024-11-20 07:28:27.255014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.255045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.255410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.255443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.255759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.255790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.256146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.256188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.256562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.256594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.256984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.257015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.257384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.257416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.257655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.257685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.258058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.258088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.258459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.258490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 
00:30:05.247 [2024-11-20 07:28:27.258727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.258757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.259105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.259135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.259523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.259554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.259909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.259939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.260296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.260328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.260700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.260731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.261084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.261113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.261478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.261510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.261864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.261894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.262234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.262266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 
00:30:05.247 [2024-11-20 07:28:27.262624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.262654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.263004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.263035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.263408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.263439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.263799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.263829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.264182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.264214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.264577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.264607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.264953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-11-20 07:28:27.264983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-11-20 07:28:27.265340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.265372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.265693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.265724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.266081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.266111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 
00:30:05.248 [2024-11-20 07:28:27.266501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.266533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.266757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.266788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.267138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.267182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.267569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.267598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.267945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.267975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.268345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.268377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.268733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.268763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.269006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.269038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.269409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.269440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.269681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.269712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 
00:30:05.248 [2024-11-20 07:28:27.270059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.270090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.270451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.270482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.270855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.270885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.271279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.271309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.271669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.271699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.272014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.272045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.272403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.272435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.272832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.272862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.273259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.273309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.273543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.273575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 
00:30:05.248 [2024-11-20 07:28:27.273929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.273959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.274310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.274341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.274698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.274729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.275083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.275114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.275419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.248 [2024-11-20 07:28:27.275451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.248 qpair failed and we were unable to recover it. 00:30:05.248 [2024-11-20 07:28:27.275794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.275825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.276202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.276234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.276484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.276514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.276875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.276905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.277278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.277310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 
00:30:05.249 [2024-11-20 07:28:27.277685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.277717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.278068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.278097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.278466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.278496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.278861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.278891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.279238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.279268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.279519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.279548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.279903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.279933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.280301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.280338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.280681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.280710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.281087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.281116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 
00:30:05.249 [2024-11-20 07:28:27.281463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.281496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.281894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.281923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.282284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.282317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.282670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.282699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.282918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.282949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.283320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.283351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.283738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.283769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.284124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.284156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.284503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.284535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.284899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.284930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 
00:30:05.249 [2024-11-20 07:28:27.285276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.285307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.285654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.285683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.286055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.286085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.286503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.286534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.286895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.286926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.287278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.287309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.287681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.287710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.287945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.287977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.288336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.288367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.288717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.288749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 
00:30:05.249 [2024-11-20 07:28:27.289126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.289156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.289525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.289554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.289925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.289955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.249 [2024-11-20 07:28:27.290195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.249 [2024-11-20 07:28:27.290226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.249 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.290593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.290630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.290982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.291013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.291347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.291378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.291727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.291758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.292103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.292133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.292563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.292593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 
00:30:05.250 [2024-11-20 07:28:27.292949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.292978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.293319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.293351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.293718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.293748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.294107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.294136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.294539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.294572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.294916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.294947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.295283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.295314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.295589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.295619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.295960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.295990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.296337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.296367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 
00:30:05.250 [2024-11-20 07:28:27.296743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.296772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.297102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.297132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.297495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.297525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.297904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.297934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.298284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.298313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.298682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.298712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.299085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.299114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.299507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.299540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.299879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.299909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.300143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.300189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 
00:30:05.250 [2024-11-20 07:28:27.300520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.300550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.300890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.300920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.301275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.301307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.301670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.301701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.302038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.302067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.302327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.302360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.302632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.302663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.303009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.303038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.303407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.303438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.303659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.303691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 
00:30:05.250 [2024-11-20 07:28:27.304051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.304082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-11-20 07:28:27.304423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-11-20 07:28:27.304454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.304800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.304830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.305178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.305210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.305565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.305595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.305963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.305994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.306342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.306373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.306627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.306656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.306999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.307027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.307297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.307327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 
00:30:05.251 [2024-11-20 07:28:27.307688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.307718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.308049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.308078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.308464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.308495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.308854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.308885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.309250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.309281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.309655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.309685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.310062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.310093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.310718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.310749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.311089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.311117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.311494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.311525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 
00:30:05.251 [2024-11-20 07:28:27.311832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.311861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.312205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.312236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.312498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.312527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.312849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.312879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.313235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.313265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.313632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.313661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.314031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.314061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.314430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.314460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.314826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.314856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.315215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.315245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 
00:30:05.251 [2024-11-20 07:28:27.315591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.315621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.315970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.315999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.316346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.316382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.316729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.316758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.317103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.317132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.317497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.317529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.317918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.317946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.318291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.318322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.318670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.318700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-11-20 07:28:27.319030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-11-20 07:28:27.319058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 
00:30:05.252 [2024-11-20 07:28:27.319417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.319447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.319817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.319846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.320203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.320233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.320594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.320623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.320996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.321025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.321389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.321419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.321773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.321803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.322213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.322243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.322473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.322502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.322880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.322908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 
00:30:05.252 [2024-11-20 07:28:27.323268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.323299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.323663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.323692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.323947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.323979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.324319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.324350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.324713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.324743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.325106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.325136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.325381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.325415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.325768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.325798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.326150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.326194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.326540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.326582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 
00:30:05.252 [2024-11-20 07:28:27.326899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.326929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.327266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.327297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.327555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.327583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.327927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.327956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.328294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.328326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.328582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.328613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.328951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.328980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.329342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.329373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.329714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.329743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.330121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.330150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 
00:30:05.252 [2024-11-20 07:28:27.330556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.330586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.330944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.330972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-11-20 07:28:27.331320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-11-20 07:28:27.331351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.331721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.331750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.332107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.332136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.332485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.332515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.332885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.332915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.333290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.333320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.333727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.333757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.334054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.334082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 
00:30:05.253 [2024-11-20 07:28:27.334455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.334486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.334865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.334895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.335132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.335175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.335543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.335573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.335917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.335945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.336313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.336343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.336719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.336754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.337118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.337147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.337458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.337488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.337829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.337859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 
00:30:05.253 [2024-11-20 07:28:27.338227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.338258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.338623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.338660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.338990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.339019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.339382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.339413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.339776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.339806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.340180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.340211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.340457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.340490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.340846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.340878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.341215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.341246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.341612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.341641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 
00:30:05.253 [2024-11-20 07:28:27.342019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.342049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.342386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.342418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.342776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.342806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.343187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.343218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.343574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.343602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.343967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.343996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.344270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.344299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.344658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.344687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.345066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.345095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.253 [2024-11-20 07:28:27.345460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.345491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 
00:30:05.253 [2024-11-20 07:28:27.345839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-20 07:28:27.345868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.253 qpair failed and we were unable to recover it. 00:30:05.254 [2024-11-20 07:28:27.346198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.254 [2024-11-20 07:28:27.346227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.254 qpair failed and we were unable to recover it. 00:30:05.254 [2024-11-20 07:28:27.346555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.254 [2024-11-20 07:28:27.346585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.254 qpair failed and we were unable to recover it. 00:30:05.254 [2024-11-20 07:28:27.346957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.254 [2024-11-20 07:28:27.346988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.254 qpair failed and we were unable to recover it. 00:30:05.254 [2024-11-20 07:28:27.347338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.254 [2024-11-20 07:28:27.347368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.254 qpair failed and we were unable to recover it. 00:30:05.254 [2024-11-20 07:28:27.347726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.254 [2024-11-20 07:28:27.347764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.254 qpair failed and we were unable to recover it. 00:30:05.254 [2024-11-20 07:28:27.348114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.254 [2024-11-20 07:28:27.348143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.254 qpair failed and we were unable to recover it. 00:30:05.254 [2024-11-20 07:28:27.348524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.254 [2024-11-20 07:28:27.348554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.254 qpair failed and we were unable to recover it. 00:30:05.254 [2024-11-20 07:28:27.348908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.254 [2024-11-20 07:28:27.348936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.254 qpair failed and we were unable to recover it. 00:30:05.254 [2024-11-20 07:28:27.349295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.254 [2024-11-20 07:28:27.349326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.254 qpair failed and we were unable to recover it. 
00:30:05.254 [2024-11-20 07:28:27.349708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.254 [2024-11-20 07:28:27.349736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.254 qpair failed and we were unable to recover it. 00:30:05.254 [2024-11-20 07:28:27.350090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.254 [2024-11-20 07:28:27.350118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.254 qpair failed and we were unable to recover it. 00:30:05.254 [2024-11-20 07:28:27.350505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.254 [2024-11-20 07:28:27.350535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.254 qpair failed and we were unable to recover it. 00:30:05.254 [2024-11-20 07:28:27.350917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.254 [2024-11-20 07:28:27.350947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.254 qpair failed and we were unable to recover it. 00:30:05.254 [2024-11-20 07:28:27.351326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.254 [2024-11-20 07:28:27.351356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.254 qpair failed and we were unable to recover it. 00:30:05.254 [2024-11-20 07:28:27.351724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.254 [2024-11-20 07:28:27.351753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.254 qpair failed and we were unable to recover it. 00:30:05.254 [2024-11-20 07:28:27.352117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.254 [2024-11-20 07:28:27.352145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.254 qpair failed and we were unable to recover it. 00:30:05.254 [2024-11-20 07:28:27.352526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.254 [2024-11-20 07:28:27.352556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.254 qpair failed and we were unable to recover it. 00:30:05.254 [2024-11-20 07:28:27.352959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.254 [2024-11-20 07:28:27.352987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.254 qpair failed and we were unable to recover it. 00:30:05.254 [2024-11-20 07:28:27.353349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.254 [2024-11-20 07:28:27.353380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.254 qpair failed and we were unable to recover it. 
00:30:05.259 [2024-11-20 07:28:27.426046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.259 [2024-11-20 07:28:27.426076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.259 qpair failed and we were unable to recover it. 00:30:05.259 [2024-11-20 07:28:27.426314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.259 [2024-11-20 07:28:27.426348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.259 qpair failed and we were unable to recover it. 00:30:05.259 [2024-11-20 07:28:27.426606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.259 [2024-11-20 07:28:27.426635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.259 qpair failed and we were unable to recover it. 00:30:05.259 [2024-11-20 07:28:27.426981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.259 [2024-11-20 07:28:27.427011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.259 qpair failed and we were unable to recover it. 00:30:05.259 [2024-11-20 07:28:27.427393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.259 [2024-11-20 07:28:27.427423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.259 qpair failed and we were unable to recover it. 00:30:05.259 [2024-11-20 07:28:27.427752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.259 [2024-11-20 07:28:27.427781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.259 qpair failed and we were unable to recover it. 00:30:05.259 [2024-11-20 07:28:27.428148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.259 [2024-11-20 07:28:27.428189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.259 qpair failed and we were unable to recover it. 00:30:05.259 [2024-11-20 07:28:27.428559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.259 [2024-11-20 07:28:27.428589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.259 qpair failed and we were unable to recover it. 00:30:05.259 [2024-11-20 07:28:27.428961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.259 [2024-11-20 07:28:27.428990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.259 qpair failed and we were unable to recover it. 00:30:05.259 [2024-11-20 07:28:27.429390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.259 [2024-11-20 07:28:27.429422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.259 qpair failed and we were unable to recover it. 
00:30:05.259 [2024-11-20 07:28:27.429828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.259 [2024-11-20 07:28:27.429858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.259 qpair failed and we were unable to recover it. 00:30:05.259 [2024-11-20 07:28:27.430078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.259 [2024-11-20 07:28:27.430107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.259 qpair failed and we were unable to recover it. 00:30:05.259 [2024-11-20 07:28:27.430461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.259 [2024-11-20 07:28:27.430492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.259 qpair failed and we were unable to recover it. 00:30:05.259 [2024-11-20 07:28:27.430880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.259 [2024-11-20 07:28:27.430909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.259 qpair failed and we were unable to recover it. 00:30:05.259 [2024-11-20 07:28:27.431184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.259 [2024-11-20 07:28:27.431214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.431444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.431473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.431834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.431863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.432217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.432247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.432598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.432628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.433003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.433032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 
00:30:05.260 [2024-11-20 07:28:27.433282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.433312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.433652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.433682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.434053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.434081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.434299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.434329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.434602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.434632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.434982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.435011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.435342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.435372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.435779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.435809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.436184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.436214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.436580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.436609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 
00:30:05.260 [2024-11-20 07:28:27.436959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.436989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.437381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.437412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.437623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.437652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.438017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.438046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.438289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.438321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.438704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.438733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.438990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.439018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.439417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.439448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.439863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.439892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.440139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.440179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 
00:30:05.260 [2024-11-20 07:28:27.440483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.440513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.440871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.440899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.441278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.441309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.441660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.441690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.441917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.441946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.442209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.442242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.442638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.442668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.442993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.443022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.443388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.443420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.443676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.443706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 
00:30:05.260 [2024-11-20 07:28:27.444050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.444080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.444443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.444481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.260 [2024-11-20 07:28:27.444827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.260 [2024-11-20 07:28:27.444857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.260 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.445206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.445237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.445508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.445538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.445900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.445929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.446286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.446317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.446707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.446736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.447122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.447152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.447605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.447634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 
00:30:05.261 [2024-11-20 07:28:27.447991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.448020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.448243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.448273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.448620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.448649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.448997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.449026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.449390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.449422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.449761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.449792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.450168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.450200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.450563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.450592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.450810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.450840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.451184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.451214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 
00:30:05.261 [2024-11-20 07:28:27.451394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.451424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.451814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.451844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.452190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.452221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.452581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.452612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.452961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.452991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.453322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.453352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.453711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.453740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.454097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.454126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.454447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.454485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.454821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.454851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 
00:30:05.261 [2024-11-20 07:28:27.455182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.455213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.455476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.455504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.455840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.455870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.456226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.456257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.456608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.456638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.457008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.457037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.457380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.457411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.457767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.457795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.458152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.458192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.458555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.458585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 
00:30:05.261 [2024-11-20 07:28:27.458942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.261 [2024-11-20 07:28:27.458971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.261 qpair failed and we were unable to recover it. 00:30:05.261 [2024-11-20 07:28:27.459329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.459359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.459748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.459778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.460123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.460151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.460521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.460552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.460913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.460943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.461303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.461335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.461686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.461715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.462096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.462127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.462509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.462540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 
00:30:05.262 [2024-11-20 07:28:27.462870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.462901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.463266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.463297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.463682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.463711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.464059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.464087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.464420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.464450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.464841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.464870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.465218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.465248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.465621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.465651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.465965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.465993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.466378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.466408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 
00:30:05.262 [2024-11-20 07:28:27.466853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.466883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.467247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.467278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.467615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.467644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.468032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.468061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.468412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.468441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.468785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.468814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.469060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.469088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.469439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.469471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.469857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.469888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.470252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.470284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 
00:30:05.262 [2024-11-20 07:28:27.470608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.470642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.471025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.471055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.471385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.471416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.471790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.471819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.262 qpair failed and we were unable to recover it. 00:30:05.262 [2024-11-20 07:28:27.472179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.262 [2024-11-20 07:28:27.472211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.263 qpair failed and we were unable to recover it. 00:30:05.263 [2024-11-20 07:28:27.472578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.263 [2024-11-20 07:28:27.472611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.263 qpair failed and we were unable to recover it. 00:30:05.263 [2024-11-20 07:28:27.472963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.263 [2024-11-20 07:28:27.472993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.263 qpair failed and we were unable to recover it. 00:30:05.263 [2024-11-20 07:28:27.473343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.263 [2024-11-20 07:28:27.473374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.263 qpair failed and we were unable to recover it. 00:30:05.263 [2024-11-20 07:28:27.473710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.263 [2024-11-20 07:28:27.473743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.263 qpair failed and we were unable to recover it. 00:30:05.263 [2024-11-20 07:28:27.474074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.263 [2024-11-20 07:28:27.474104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.263 qpair failed and we were unable to recover it. 
00:30:05.263 [2024-11-20 07:28:27.474480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.263 [2024-11-20 07:28:27.474511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.263 qpair failed and we were unable to recover it. 00:30:05.263 [2024-11-20 07:28:27.474866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.263 [2024-11-20 07:28:27.474895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.263 qpair failed and we were unable to recover it. 00:30:05.263 [2024-11-20 07:28:27.475253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.263 [2024-11-20 07:28:27.475283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.263 qpair failed and we were unable to recover it. 00:30:05.263 [2024-11-20 07:28:27.475652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.263 [2024-11-20 07:28:27.475683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.263 qpair failed and we were unable to recover it. 00:30:05.263 [2024-11-20 07:28:27.476042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.263 [2024-11-20 07:28:27.476071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.263 qpair failed and we were unable to recover it. 00:30:05.263 [2024-11-20 07:28:27.476412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.263 [2024-11-20 07:28:27.476444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.263 qpair failed and we were unable to recover it. 00:30:05.263 [2024-11-20 07:28:27.476820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.263 [2024-11-20 07:28:27.476851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.263 qpair failed and we were unable to recover it. 00:30:05.263 [2024-11-20 07:28:27.477223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.263 [2024-11-20 07:28:27.477254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.263 qpair failed and we were unable to recover it. 00:30:05.263 [2024-11-20 07:28:27.477620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.263 [2024-11-20 07:28:27.477650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.263 qpair failed and we were unable to recover it. 00:30:05.263 [2024-11-20 07:28:27.478016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.263 [2024-11-20 07:28:27.478045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.263 qpair failed and we were unable to recover it. 
00:30:05.263 [2024-11-20 07:28:27.478422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.263 [2024-11-20 07:28:27.478452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.263 qpair failed and we were unable to recover it.
[... the same three-message sequence repeats for every subsequent connect attempt (log offsets 00:30:05.263 through 00:30:05.543, timestamps 2024-11-20 07:28:27.478 through 07:28:27.557): posix.c:1054:posix_sock_create fails with errno = 111, nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:30:05.543 [2024-11-20 07:28:27.557937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-11-20 07:28:27.557967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 00:30:05.543 [2024-11-20 07:28:27.558300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-11-20 07:28:27.558329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 00:30:05.543 [2024-11-20 07:28:27.558710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-11-20 07:28:27.558740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 00:30:05.543 [2024-11-20 07:28:27.559108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-11-20 07:28:27.559137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 00:30:05.543 [2024-11-20 07:28:27.559528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-11-20 07:28:27.559558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 00:30:05.543 [2024-11-20 07:28:27.559796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-11-20 07:28:27.559829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 00:30:05.543 [2024-11-20 07:28:27.560196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-11-20 07:28:27.560228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 00:30:05.543 [2024-11-20 07:28:27.560582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-11-20 07:28:27.560611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 00:30:05.543 [2024-11-20 07:28:27.560946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-11-20 07:28:27.560976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 00:30:05.543 [2024-11-20 07:28:27.561357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-11-20 07:28:27.561387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 
00:30:05.543 [2024-11-20 07:28:27.561756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-11-20 07:28:27.561786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 00:30:05.543 [2024-11-20 07:28:27.562145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-11-20 07:28:27.562185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 00:30:05.543 [2024-11-20 07:28:27.562414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-11-20 07:28:27.562443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 00:30:05.543 [2024-11-20 07:28:27.562801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.562837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.563188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.563220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.563579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.563611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.563949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.563978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.564339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.564370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.564730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.564760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.565112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.565141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 
00:30:05.544 [2024-11-20 07:28:27.565461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.565491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.565820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.565849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.566225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.566256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.566661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.566689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.567043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.567073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.567465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.567496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.567848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.567885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.568258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.568289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.568675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.568704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.569077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.569105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 
00:30:05.544 [2024-11-20 07:28:27.569483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.569514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.569866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.569895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.570262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.570293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.570627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.570657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.570987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.571016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.571335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.571365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.571719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.571749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.572087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.572116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.572482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-11-20 07:28:27.572512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-11-20 07:28:27.572840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.572870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 
00:30:05.545 [2024-11-20 07:28:27.573214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.573251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.573582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.573610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.573953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.573984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.574314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.574344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.574719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.574749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.575100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.575129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.575436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.575466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.575863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.575892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.576127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.576169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.576537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.576566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 
00:30:05.545 [2024-11-20 07:28:27.576960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.576989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.577339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.577369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.577722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.577751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.578126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.578155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.578524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.578553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.578917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.578946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.579280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.579310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.579673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.579701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.580030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.580061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.580416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.580447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 
00:30:05.545 [2024-11-20 07:28:27.580702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.580732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.581118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.581148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.581556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.581587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.581913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.581943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.582287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.582317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.582559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.582593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.582977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.583007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.583331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.583368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.583701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.583731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.584106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.584135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 
00:30:05.545 [2024-11-20 07:28:27.584391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.584421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.584765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.584793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.585097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.585125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-11-20 07:28:27.585496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-11-20 07:28:27.585527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.585868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.585897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.586275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.586306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.586647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.586675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.587034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.587062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.587402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.587432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.587772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.587800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 
00:30:05.546 [2024-11-20 07:28:27.588136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.588183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.588559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.588590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.588830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.588858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.589241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.589273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.589525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.589556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.589894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.589923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.590243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.590273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.590647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.590677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.591043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.591072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.591428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.591458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 
00:30:05.546 [2024-11-20 07:28:27.591802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.591832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.592253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.592282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.592646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.592675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.593013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.593042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.593396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.593426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.593800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.593830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.594174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.594207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.594552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.594581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.594835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.594867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.595207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.595238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 
00:30:05.546 [2024-11-20 07:28:27.595605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.595634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.595985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.596013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.596402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.596433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.596800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.596831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.597198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.597229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.597492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.597524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.546 qpair failed and we were unable to recover it. 00:30:05.546 [2024-11-20 07:28:27.597881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.546 [2024-11-20 07:28:27.597910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.598253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.598282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.598657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.598687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.599069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.599098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 
00:30:05.547 [2024-11-20 07:28:27.599347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.599377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.599742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.599771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.600116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.600144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.600534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.600565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.600821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.600849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.601203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.601234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.601586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.601617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.601975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.602004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.602351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.602382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.602754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.602783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 
00:30:05.547 [2024-11-20 07:28:27.603154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.603198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.603548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.603577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.603935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.603965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.604340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.604372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.604728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.604757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.605131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.605170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.605559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.605590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.605955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.605985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.606336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.606366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.606685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.606716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 
00:30:05.547 [2024-11-20 07:28:27.607069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.607098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.607512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.607542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.607891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.607920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.608295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.608326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.608573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.608602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.608963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.608998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.609259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.609292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.609664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.609694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.547 [2024-11-20 07:28:27.610022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.547 [2024-11-20 07:28:27.610053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.547 qpair failed and we were unable to recover it. 00:30:05.548 [2024-11-20 07:28:27.610293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.548 [2024-11-20 07:28:27.610326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.548 qpair failed and we were unable to recover it. 
00:30:05.548 [2024-11-20 07:28:27.610562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.548 [2024-11-20 07:28:27.610590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.548 qpair failed and we were unable to recover it. 00:30:05.548 [2024-11-20 07:28:27.610943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.548 [2024-11-20 07:28:27.610972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.548 qpair failed and we were unable to recover it. 00:30:05.548 [2024-11-20 07:28:27.611327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.548 [2024-11-20 07:28:27.611358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.548 qpair failed and we were unable to recover it. 00:30:05.548 [2024-11-20 07:28:27.611744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.548 [2024-11-20 07:28:27.611774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.548 qpair failed and we were unable to recover it. 00:30:05.548 [2024-11-20 07:28:27.612126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.548 [2024-11-20 07:28:27.612154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.548 qpair failed and we were unable to recover it. 00:30:05.548 [2024-11-20 07:28:27.612447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.548 [2024-11-20 07:28:27.612476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.548 qpair failed and we were unable to recover it. 00:30:05.548 [2024-11-20 07:28:27.612830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.548 [2024-11-20 07:28:27.612859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.548 qpair failed and we were unable to recover it. 00:30:05.548 [2024-11-20 07:28:27.613194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.548 [2024-11-20 07:28:27.613224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.548 qpair failed and we were unable to recover it. 00:30:05.548 [2024-11-20 07:28:27.613609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.548 [2024-11-20 07:28:27.613639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.548 qpair failed and we were unable to recover it. 00:30:05.548 [2024-11-20 07:28:27.614012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.548 [2024-11-20 07:28:27.614042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.548 qpair failed and we were unable to recover it. 
00:30:05.548 [2024-11-20 07:28:27.614424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-11-20 07:28:27.614454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.548 qpair failed and we were unable to recover it.
00:30:05.548 [2024-11-20 07:28:27.614818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-11-20 07:28:27.614847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.548 qpair failed and we were unable to recover it.
00:30:05.548 [2024-11-20 07:28:27.615223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-11-20 07:28:27.615254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.548 qpair failed and we were unable to recover it.
00:30:05.548 [2024-11-20 07:28:27.615693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-11-20 07:28:27.615722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.548 qpair failed and we were unable to recover it.
00:30:05.548 [2024-11-20 07:28:27.616078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-11-20 07:28:27.616108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.548 qpair failed and we were unable to recover it.
00:30:05.548 [2024-11-20 07:28:27.616462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-11-20 07:28:27.616493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.548 qpair failed and we were unable to recover it.
00:30:05.548 [2024-11-20 07:28:27.616867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-11-20 07:28:27.616898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.548 qpair failed and we were unable to recover it.
00:30:05.548 [2024-11-20 07:28:27.617244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-11-20 07:28:27.617276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.548 qpair failed and we were unable to recover it.
00:30:05.548 [2024-11-20 07:28:27.617631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-11-20 07:28:27.617660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.548 qpair failed and we were unable to recover it.
00:30:05.548 [2024-11-20 07:28:27.617871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-11-20 07:28:27.617900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.548 qpair failed and we were unable to recover it.
00:30:05.548 [2024-11-20 07:28:27.618277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-11-20 07:28:27.618309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.548 qpair failed and we were unable to recover it.
00:30:05.548 [2024-11-20 07:28:27.618671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-11-20 07:28:27.618700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.548 qpair failed and we were unable to recover it.
00:30:05.548 [2024-11-20 07:28:27.619075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-11-20 07:28:27.619110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.548 qpair failed and we were unable to recover it.
00:30:05.548 [2024-11-20 07:28:27.619483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-11-20 07:28:27.619513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.548 qpair failed and we were unable to recover it.
00:30:05.548 [2024-11-20 07:28:27.619864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-11-20 07:28:27.619893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.548 qpair failed and we were unable to recover it.
00:30:05.548 [2024-11-20 07:28:27.620241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-11-20 07:28:27.620272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.548 qpair failed and we were unable to recover it.
00:30:05.548 [2024-11-20 07:28:27.620612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-11-20 07:28:27.620648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.548 qpair failed and we were unable to recover it.
00:30:05.548 [2024-11-20 07:28:27.620993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-11-20 07:28:27.621021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.548 qpair failed and we were unable to recover it.
00:30:05.548 [2024-11-20 07:28:27.621330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-11-20 07:28:27.621361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.548 qpair failed and we were unable to recover it.
00:30:05.548 [2024-11-20 07:28:27.621714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-11-20 07:28:27.621743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.548 qpair failed and we were unable to recover it.
00:30:05.548 [2024-11-20 07:28:27.622077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-11-20 07:28:27.622107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.548 qpair failed and we were unable to recover it.
00:30:05.548 [2024-11-20 07:28:27.622467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-11-20 07:28:27.622497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.548 qpair failed and we were unable to recover it.
00:30:05.548 [2024-11-20 07:28:27.622870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 [2024-11-20 07:28:27.622898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 [2024-11-20 07:28:27.623273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 [2024-11-20 07:28:27.623303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 [2024-11-20 07:28:27.623655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 [2024-11-20 07:28:27.623684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 [2024-11-20 07:28:27.624043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 [2024-11-20 07:28:27.624072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 [2024-11-20 07:28:27.624427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 [2024-11-20 07:28:27.624457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 [2024-11-20 07:28:27.624834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 [2024-11-20 07:28:27.624863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 [2024-11-20 07:28:27.625243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 [2024-11-20 07:28:27.625275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 [2024-11-20 07:28:27.625605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 [2024-11-20 07:28:27.625634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 [2024-11-20 07:28:27.626007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 [2024-11-20 07:28:27.626036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 [2024-11-20 07:28:27.626384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 [2024-11-20 07:28:27.626416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 [2024-11-20 07:28:27.626757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 [2024-11-20 07:28:27.626787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 [2024-11-20 07:28:27.627146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 [2024-11-20 07:28:27.627185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 [2024-11-20 07:28:27.627540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 [2024-11-20 07:28:27.627569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 [2024-11-20 07:28:27.627901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 [2024-11-20 07:28:27.627932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 [2024-11-20 07:28:27.628314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 [2024-11-20 07:28:27.628346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3709334 Killed "${NVMF_APP[@]}" "$@"
00:30:05.549 [2024-11-20 07:28:27.628582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 [2024-11-20 07:28:27.628613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 [2024-11-20 07:28:27.628980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 [2024-11-20 07:28:27.629009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 [2024-11-20 07:28:27.629382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 07:28:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:05.549 [2024-11-20 07:28:27.629416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 [2024-11-20 07:28:27.629769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 07:28:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:05.549 [2024-11-20 07:28:27.629799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 07:28:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:05.549 [2024-11-20 07:28:27.630199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 [2024-11-20 07:28:27.630232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 07:28:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:30:05.549 [2024-11-20 07:28:27.630502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 [2024-11-20 07:28:27.630533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 07:28:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:05.549 [2024-11-20 07:28:27.630819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 [2024-11-20 07:28:27.630849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 [2024-11-20 07:28:27.631198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 [2024-11-20 07:28:27.631229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 [2024-11-20 07:28:27.631618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.549 [2024-11-20 07:28:27.631647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.549 qpair failed and we were unable to recover it.
00:30:05.549 [2024-11-20 07:28:27.632023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.632054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 [2024-11-20 07:28:27.632376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.632406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 [2024-11-20 07:28:27.632770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.632800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 [2024-11-20 07:28:27.633137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.633185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 [2024-11-20 07:28:27.633583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.633625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 [2024-11-20 07:28:27.633968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.634000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 [2024-11-20 07:28:27.634374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.634406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 [2024-11-20 07:28:27.634768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.634800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 [2024-11-20 07:28:27.635135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.635177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 [2024-11-20 07:28:27.635579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.635610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 [2024-11-20 07:28:27.635989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.636018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 [2024-11-20 07:28:27.636355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.636386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 [2024-11-20 07:28:27.636788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.636817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 [2024-11-20 07:28:27.637181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.637212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 [2024-11-20 07:28:27.637524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.637554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 [2024-11-20 07:28:27.637923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.637952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 [2024-11-20 07:28:27.638299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.638329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 07:28:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3710358
00:30:05.550 [2024-11-20 07:28:27.638702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.638734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 07:28:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3710358
00:30:05.550 [2024-11-20 07:28:27.639122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.639156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 07:28:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 07:28:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3710358 ']' [2024-11-20 07:28:27.639553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.639588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 07:28:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:05.550 07:28:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 [2024-11-20 07:28:27.639981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.640013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 07:28:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... [2024-11-20 07:28:27.640275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.640307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 07:28:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:30:05.550 [2024-11-20 07:28:27.640551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 07:28:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:05.550 [2024-11-20 07:28:27.640588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 [2024-11-20 07:28:27.640932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.640964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 [2024-11-20 07:28:27.641302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.641336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 [2024-11-20 07:28:27.641682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.641714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 [2024-11-20 07:28:27.642057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.550 [2024-11-20 07:28:27.642087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.550 qpair failed and we were unable to recover it.
00:30:05.550 [2024-11-20 07:28:27.642378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.642410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.642810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.642847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.643090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.643120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.643516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.643549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.643914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.643945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.644532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.644569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.644910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.644949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.645267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.645301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.645571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.645603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.645767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.645798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.646060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.646090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.646533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.646565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.646912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.646943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.647291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.647329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.647560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.647595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.647836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.647867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.648235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.648267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.648648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.648682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.648912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.648943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.649306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.649340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.649688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.649721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.650098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.650129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.650406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.650438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.650837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.650867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.651133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.651177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.651543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.651573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.651922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.651994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.652346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.652379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.652757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.652787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.653067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.653096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.653332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.653365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.653655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.653685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.654064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.551 [2024-11-20 07:28:27.654093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.551 qpair failed and we were unable to recover it.
00:30:05.551 [2024-11-20 07:28:27.654459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.654489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.654863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.654896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.655263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.655293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.655653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.655681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.655940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.655969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.656206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.656238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.656614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.656643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.657021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.657057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.657436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.657468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.657822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.657856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.658115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.658145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.658556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.658587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.658746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.658779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.659178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.659209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.659551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.659582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.659917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.659946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.660210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.660242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.660481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.660512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.660858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.660887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.661220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.661251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.661590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.661619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.661997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.662027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.662275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.662306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.662707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.662738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.662985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.663015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.663394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.663425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.663663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.663692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.664072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.664101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.664339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.664370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.664685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.664714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.664973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.665001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.665357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.665389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.665741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.665770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.666096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.552 [2024-11-20 07:28:27.666126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.552 qpair failed and we were unable to recover it.
00:30:05.552 [2024-11-20 07:28:27.666523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.553 [2024-11-20 07:28:27.666554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.553 qpair failed and we were unable to recover it.
00:30:05.553 [2024-11-20 07:28:27.666934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.553 [2024-11-20 07:28:27.666963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.553 qpair failed and we were unable to recover it.
00:30:05.553 [2024-11-20 07:28:27.667327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.553 [2024-11-20 07:28:27.667359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.553 qpair failed and we were unable to recover it.
00:30:05.553 [2024-11-20 07:28:27.667575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.553 [2024-11-20 07:28:27.667604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.553 qpair failed and we were unable to recover it.
00:30:05.553 [2024-11-20 07:28:27.667939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.553 [2024-11-20 07:28:27.667969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.553 qpair failed and we were unable to recover it.
00:30:05.553 [2024-11-20 07:28:27.668341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.553 [2024-11-20 07:28:27.668373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.553 qpair failed and we were unable to recover it.
00:30:05.553 [2024-11-20 07:28:27.668746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.553 [2024-11-20 07:28:27.668775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.553 qpair failed and we were unable to recover it.
00:30:05.553 [2024-11-20 07:28:27.669017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.553 [2024-11-20 07:28:27.669048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.553 qpair failed and we were unable to recover it.
00:30:05.553 [2024-11-20 07:28:27.669398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.553 [2024-11-20 07:28:27.669429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.553 qpair failed and we were unable to recover it.
00:30:05.553 [2024-11-20 07:28:27.669678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.553 [2024-11-20 07:28:27.669707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.553 qpair failed and we were unable to recover it.
00:30:05.553 [2024-11-20 07:28:27.669949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.553 [2024-11-20 07:28:27.669980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.553 qpair failed and we were unable to recover it.
00:30:05.553 [2024-11-20 07:28:27.670213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.553 [2024-11-20 07:28:27.670244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.553 qpair failed and we were unable to recover it.
00:30:05.553 [2024-11-20 07:28:27.670582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.553 [2024-11-20 07:28:27.670611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.553 qpair failed and we were unable to recover it.
00:30:05.553 [2024-11-20 07:28:27.670976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.553 [2024-11-20 07:28:27.671006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.553 qpair failed and we were unable to recover it.
00:30:05.553 [2024-11-20 07:28:27.671381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.553 [2024-11-20 07:28:27.671413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.553 qpair failed and we were unable to recover it.
00:30:05.553 [2024-11-20 07:28:27.671776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.553 [2024-11-20 07:28:27.671805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.553 qpair failed and we were unable to recover it.
00:30:05.553 [2024-11-20 07:28:27.672194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.553 [2024-11-20 07:28:27.672226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.553 qpair failed and we were unable to recover it.
00:30:05.553 [2024-11-20 07:28:27.672577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.553 [2024-11-20 07:28:27.672606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.553 qpair failed and we were unable to recover it.
00:30:05.553 [2024-11-20 07:28:27.673000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.553 [2024-11-20 07:28:27.673029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.553 qpair failed and we were unable to recover it.
00:30:05.553 [2024-11-20 07:28:27.673363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.553 [2024-11-20 07:28:27.673398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.553 qpair failed and we were unable to recover it.
00:30:05.553 [2024-11-20 07:28:27.673790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.553 [2024-11-20 07:28:27.673819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.553 qpair failed and we were unable to recover it.
00:30:05.553 [2024-11-20 07:28:27.674186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-11-20 07:28:27.674224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-11-20 07:28:27.674605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-11-20 07:28:27.674637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-11-20 07:28:27.674988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-11-20 07:28:27.675020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-11-20 07:28:27.675282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-11-20 07:28:27.675313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-11-20 07:28:27.675706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-11-20 07:28:27.675736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-11-20 07:28:27.676093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-11-20 07:28:27.676123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-11-20 07:28:27.676397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-11-20 07:28:27.676426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-11-20 07:28:27.676559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-11-20 07:28:27.676587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-11-20 07:28:27.676932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-11-20 07:28:27.676961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-11-20 07:28:27.677360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-11-20 07:28:27.677392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 
00:30:05.553 [2024-11-20 07:28:27.677750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-11-20 07:28:27.677780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-11-20 07:28:27.678129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-11-20 07:28:27.678169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-11-20 07:28:27.678433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-11-20 07:28:27.678463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-11-20 07:28:27.678850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.678880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.679290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.679320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.679678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.679708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.680054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.680085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.680455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.680485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.680923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.680952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.681320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.681351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 
00:30:05.554 [2024-11-20 07:28:27.681569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.681604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.681968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.681997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.682423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.682454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.682678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.682706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.682842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.682873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.683251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.683282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.683737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.683766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.683992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.684021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.684348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.684378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.684747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.684775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 
00:30:05.554 [2024-11-20 07:28:27.685155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.685199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.685446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.685474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.685839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.685868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.686112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.686141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.686594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.686624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.687069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.687098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.687442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.687473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.687853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.687882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.688282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.688313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.688693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.688722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 
00:30:05.554 [2024-11-20 07:28:27.689090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.689119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.689534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.689566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.689856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.689885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-11-20 07:28:27.690266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-11-20 07:28:27.690296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.690663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.690692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.691053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.691082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.691470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.691501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.691851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.691887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.692241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.692271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.692655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.692684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 
00:30:05.555 [2024-11-20 07:28:27.693063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.693091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.693319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.693349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.693742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.693771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.694146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.694189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.694313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.694345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.694584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.694615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.694983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.695013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.695237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.695267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.695567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.695597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.695823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.695852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 
00:30:05.555 [2024-11-20 07:28:27.696239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.696270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.696708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.696737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.696955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.696984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.697425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.697456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.697799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.697828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.698190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.698221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.698525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.698555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.698813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.698843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.699217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.699248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 00:30:05.555 [2024-11-20 07:28:27.699636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.555 [2024-11-20 07:28:27.699665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.555 qpair failed and we were unable to recover it. 
00:30:05.555 [2024-11-20 07:28:27.700054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-11-20 07:28:27.700084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.556 qpair failed and we were unable to recover it. 00:30:05.556 [2024-11-20 07:28:27.700449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-11-20 07:28:27.700480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.556 qpair failed and we were unable to recover it. 00:30:05.556 [2024-11-20 07:28:27.700856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-11-20 07:28:27.700887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.556 qpair failed and we were unable to recover it. 00:30:05.556 [2024-11-20 07:28:27.701249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-11-20 07:28:27.701280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.556 qpair failed and we were unable to recover it. 00:30:05.556 [2024-11-20 07:28:27.701645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-11-20 07:28:27.701685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.556 qpair failed and we were unable to recover it. 00:30:05.556 [2024-11-20 07:28:27.702022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-11-20 07:28:27.702050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.556 qpair failed and we were unable to recover it. 00:30:05.556 [2024-11-20 07:28:27.702279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-11-20 07:28:27.702309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.556 qpair failed and we were unable to recover it. 00:30:05.556 [2024-11-20 07:28:27.702686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-11-20 07:28:27.702716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.556 qpair failed and we were unable to recover it. 00:30:05.556 [2024-11-20 07:28:27.702928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-11-20 07:28:27.702957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.556 qpair failed and we were unable to recover it. 00:30:05.556 [2024-11-20 07:28:27.703339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-11-20 07:28:27.703370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.556 qpair failed and we were unable to recover it. 
00:30:05.556 [2024-11-20 07:28:27.703732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-11-20 07:28:27.703763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.556 qpair failed and we were unable to recover it. 00:30:05.556 [2024-11-20 07:28:27.704130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-11-20 07:28:27.704188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.556 qpair failed and we were unable to recover it. 00:30:05.556 [2024-11-20 07:28:27.704550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-11-20 07:28:27.704580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.556 qpair failed and we were unable to recover it. 00:30:05.556 [2024-11-20 07:28:27.704941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-11-20 07:28:27.704970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.556 qpair failed and we were unable to recover it. 00:30:05.556 [2024-11-20 07:28:27.705253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-11-20 07:28:27.705282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.556 qpair failed and we were unable to recover it. 00:30:05.556 [2024-11-20 07:28:27.705534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-11-20 07:28:27.705563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.556 qpair failed and we were unable to recover it. 00:30:05.556 [2024-11-20 07:28:27.705926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-11-20 07:28:27.705954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.556 qpair failed and we were unable to recover it. 00:30:05.556 [2024-11-20 07:28:27.706194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-11-20 07:28:27.706224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.556 qpair failed and we were unable to recover it. 00:30:05.556 [2024-11-20 07:28:27.706588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-11-20 07:28:27.706619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.556 qpair failed and we were unable to recover it. 00:30:05.556 [2024-11-20 07:28:27.706959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-11-20 07:28:27.706989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.556 qpair failed and we were unable to recover it. 
00:30:05.556 [2024-11-20 07:28:27.708104] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:30:05.556 [2024-11-20 07:28:27.708187] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
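The two lines above come from a second SPDK process (the nvmf application under test) starting up and printing its DPDK EAL parameters; its output was interleaved with the initiator's reconnect loop, which is why they appear mid-stream. As a rough illustration only, the sketch below shows how an application would request roughly equivalent options through SPDK's public environment API (spdk_env_opts_init()/spdk_env_init()/spdk_env_fini() from spdk/env.h); the mapping of these option fields onto the exact EAL flags printed above is an assumption, not taken from this log.

#include <stdio.h>
#include "spdk/env.h"

int main(void)
{
    struct spdk_env_opts opts;

    /* Start from SPDK's defaults, then mirror the options printed above.
     * Field-to-flag correspondence is assumed, not confirmed by this log. */
    spdk_env_opts_init(&opts);
    opts.name = "nvmf";                     /* application name            */
    opts.core_mask = "0xF0";                /* corresponds to -c 0xF0      */
    opts.base_virtaddr = 0x200000000000ULL; /* --base-virtaddr             */

    if (spdk_env_init(&opts) < 0) {
        fprintf(stderr, "spdk_env_init() failed\n");
        return 1;
    }

    /* ... application work would happen here ... */

    spdk_env_fini();
    return 0;
}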
00:30:05.556 [... the connect() failed / sock connection error / qpair failed triplet resumes immediately after the startup banner and repeats from 07:28:27.708 through 07:28:27.744, still targeting tqpair=0x20cc0c0 at addr=10.0.0.2, port=4420 ...]
00:30:05.559 [2024-11-20 07:28:27.744755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.559 [2024-11-20 07:28:27.744784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.559 qpair failed and we were unable to recover it. 00:30:05.559 [2024-11-20 07:28:27.745133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.559 [2024-11-20 07:28:27.745189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.559 qpair failed and we were unable to recover it. 00:30:05.559 [2024-11-20 07:28:27.745511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.559 [2024-11-20 07:28:27.745542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.559 qpair failed and we were unable to recover it. 00:30:05.559 [2024-11-20 07:28:27.745858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.559 [2024-11-20 07:28:27.745888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.559 qpair failed and we were unable to recover it. 00:30:05.559 [2024-11-20 07:28:27.746235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.559 [2024-11-20 07:28:27.746269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.559 qpair failed and we were unable to recover it. 00:30:05.559 [2024-11-20 07:28:27.746624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.559 [2024-11-20 07:28:27.746653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.559 qpair failed and we were unable to recover it. 00:30:05.559 [2024-11-20 07:28:27.747004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.559 [2024-11-20 07:28:27.747041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.559 qpair failed and we were unable to recover it. 00:30:05.559 [2024-11-20 07:28:27.747451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.747481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 00:30:05.560 [2024-11-20 07:28:27.747828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.747859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 00:30:05.560 [2024-11-20 07:28:27.748215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.748245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 
00:30:05.560 [2024-11-20 07:28:27.748588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.748618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 00:30:05.560 [2024-11-20 07:28:27.748998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.749027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 00:30:05.560 [2024-11-20 07:28:27.749441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.749473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 00:30:05.560 [2024-11-20 07:28:27.749856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.749885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 00:30:05.560 [2024-11-20 07:28:27.750235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.750270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 00:30:05.560 [2024-11-20 07:28:27.750624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.750658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 00:30:05.560 [2024-11-20 07:28:27.751045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.751075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 00:30:05.560 [2024-11-20 07:28:27.751461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.751493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 00:30:05.560 [2024-11-20 07:28:27.751767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.751798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 00:30:05.560 [2024-11-20 07:28:27.752175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.752208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 
00:30:05.560 [2024-11-20 07:28:27.752543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.752574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 00:30:05.560 [2024-11-20 07:28:27.752808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.752837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 00:30:05.560 [2024-11-20 07:28:27.753177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.753209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 00:30:05.560 [2024-11-20 07:28:27.753548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.753578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 00:30:05.560 [2024-11-20 07:28:27.753938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.753967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 00:30:05.560 [2024-11-20 07:28:27.754213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.754244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 00:30:05.560 [2024-11-20 07:28:27.754641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.754670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 00:30:05.560 [2024-11-20 07:28:27.754976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.755007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 00:30:05.560 [2024-11-20 07:28:27.755290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.755322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 00:30:05.560 [2024-11-20 07:28:27.755689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.755718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 
00:30:05.560 [2024-11-20 07:28:27.755939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.560 [2024-11-20 07:28:27.755973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.560 qpair failed and we were unable to recover it. 00:30:05.560 [2024-11-20 07:28:27.756339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.756371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.756746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.756776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.757152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.757201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.757580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.757612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.757964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.757994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.758338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.758369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.758645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.758675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.759068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.759098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.759467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.759499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 
00:30:05.561 [2024-11-20 07:28:27.759857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.759888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.760255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.760286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.760518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.760547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.760934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.760964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.761304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.761335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.761722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.761753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.762108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.762137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.762516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.762548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.762904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.762933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.763274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.763307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 
00:30:05.561 [2024-11-20 07:28:27.763670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.763700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.764058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.764089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.764455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.764488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.764863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.764892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.765236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.765268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.765648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.765678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.765910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.765946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.766311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.766344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.766690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.766720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.766948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.766980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 
00:30:05.561 [2024-11-20 07:28:27.767367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.767400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.561 [2024-11-20 07:28:27.767728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.561 [2024-11-20 07:28:27.767760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.561 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.768093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.768124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.768415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.768447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.768801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.768832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.769192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.769225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.769619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.769650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.770013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.770043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.770199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.770231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.770631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.770660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 
00:30:05.562 [2024-11-20 07:28:27.770919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.770951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.771302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.771333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.771681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.771711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.772023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.772054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.772286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.772317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.772656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.772686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.773084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.773113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.773505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.773536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.773898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.773928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.774292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.774324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 
00:30:05.562 [2024-11-20 07:28:27.774660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.774691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.775066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.775096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.775466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.775498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.775757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.775791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.776134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.776178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.776572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.776602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.776815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.776844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.777190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.777222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.777611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.777640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.777999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.778030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 
00:30:05.562 [2024-11-20 07:28:27.778380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.778411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.778817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.778847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.779190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.562 [2024-11-20 07:28:27.779222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.562 qpair failed and we were unable to recover it. 00:30:05.562 [2024-11-20 07:28:27.779528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.779559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.779916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.779946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.780223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.780254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.780616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.780645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.781002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.781032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.781395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.781426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.781779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.781809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 
00:30:05.563 [2024-11-20 07:28:27.782180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.782212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.782566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.782608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.782990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.783021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.783248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.783282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.783519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.783553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.783931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.783960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.784305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.784337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.784683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.784713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.785084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.785113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.785476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.785508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 
00:30:05.563 [2024-11-20 07:28:27.785839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.785868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.786240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.786271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.786661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.786690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.786999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.787028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.787387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.787419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.787663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.787693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.788058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.788087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.788374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.788408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.788736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.788766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.789174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.789206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 
00:30:05.563 [2024-11-20 07:28:27.789560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.789592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.789819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.789848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.790199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.790230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.790600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.790630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.791028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.791058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.791379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.563 [2024-11-20 07:28:27.791410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.563 qpair failed and we were unable to recover it. 00:30:05.563 [2024-11-20 07:28:27.791778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.564 [2024-11-20 07:28:27.791809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.564 qpair failed and we were unable to recover it. 00:30:05.564 [2024-11-20 07:28:27.792169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.564 [2024-11-20 07:28:27.792202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.564 qpair failed and we were unable to recover it. 00:30:05.564 [2024-11-20 07:28:27.792458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.564 [2024-11-20 07:28:27.792494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.564 qpair failed and we were unable to recover it. 00:30:05.564 [2024-11-20 07:28:27.792841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.564 [2024-11-20 07:28:27.792871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.564 qpair failed and we were unable to recover it. 
00:30:05.564 [2024-11-20 07:28:27.793237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.564 [2024-11-20 07:28:27.793269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420
00:30:05.564 qpair failed and we were unable to recover it.
[... the three-line connect()/qpair-failure record above repeats with fresh timestamps from 07:28:27.793 through 07:28:27.807; every reconnect attempt to tqpair=0x20cc0c0 at 10.0.0.2:4420 is refused ...]
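On Linux, errno = 111 is ECONNREFUSED: the TCP connection to 10.0.0.2:4420 is being actively refused, which normally means nothing is listening on the NVMe/TCP port yet. A minimal sketch for confirming that from the test node, assuming nc is installed there (this probe is illustrative and not part of the test scripts):

  # probe the NVMe/TCP listener; with no listener, the connect() fails
  # with ECONNREFUSED (errno 111), the same failure the log reports
  nc -vz 10.0.0.2 4420 || echo "connection refused, matching the log"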
[... connect() errno = 111 / tqpair=0x20cc0c0 retries continue from 07:28:27.808 through 07:28:27.810 ...]
00:30:05.842 [2024-11-20 07:28:27.810685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[... retries resume immediately and continue from 07:28:27.810 through 07:28:27.811 ...]
[... the same connect() failed, errno = 111 / sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 / qpair failed record repeats continuously from 07:28:27.811 through 07:28:27.860 ...]
[... connect() errno = 111 / tqpair=0x20cc0c0 retries continue from 07:28:27.860 through 07:28:27.863 ...]
00:30:05.846 [2024-11-20 07:28:27.863683] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:05.846 [2024-11-20 07:28:27.863731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:05.846 [2024-11-20 07:28:27.863739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:05.846 [2024-11-20 07:28:27.863747] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:05.846 [2024-11-20 07:28:27.863753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
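The notices above spell out the capture recipe directly; as a usage sketch (the command, instance id, and shared-memory path come from the notices themselves, the destination filename is arbitrary):

  # snapshot trace events while the target is still running
  spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace file for offline analysis
  cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0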
00:30:05.846 [2024-11-20 07:28:27.865657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:30:05.846 [2024-11-20 07:28:27.865806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:30:05.846 [2024-11-20 07:28:27.865965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:30:05.846 [2024-11-20 07:28:27.865965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
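These reactor_run notices show the SPDK event framework bringing up one polling thread, a reactor, for each core in the application's core mask (cores 4-7 here); each reactor then busy-polls the work assigned to it rather than sleeping on interrupts. A rough standalone sketch of that one-pinned-thread-per-core pattern (a generic illustration under that assumption, not SPDK's reactor code; the core list mirrors the log):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_int running = 1;

static void *reactor_main(void *arg)
{
    int core = (int)(long)arg;

    /* Pin this thread to its dedicated core. */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    printf("Reactor started on core %d\n", core);
    while (atomic_load(&running)) {
        /* A real reactor polls its registered work here
         * (sockets, NVMe queues, timers, ...). */
        sched_yield(); /* placeholder so the sketch does not spin hard */
    }
    return NULL;
}

int main(void)
{
    int cores[] = {4, 5, 6, 7}; /* core mask taken from the log above */
    pthread_t threads[4];

    for (int i = 0; i < 4; i++)
        pthread_create(&threads[i], NULL, reactor_main, (void *)(long)cores[i]);

    sleep(1);                  /* let the reactors spin briefly */
    atomic_store(&running, 0); /* then ask them to exit */
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);
    return 0;
}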
00:30:05.851 [2024-11-20 07:28:27.930798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.930828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-11-20 07:28:27.931219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.931251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-11-20 07:28:27.931625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.931655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-11-20 07:28:27.932023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.932053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-11-20 07:28:27.932380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.932410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-11-20 07:28:27.932644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.932673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-11-20 07:28:27.932884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.932915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-11-20 07:28:27.933318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.933350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-11-20 07:28:27.933677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.933707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-11-20 07:28:27.934053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.934082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 
00:30:05.851 [2024-11-20 07:28:27.934444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.934474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-11-20 07:28:27.934705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.934736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-11-20 07:28:27.935173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.935206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-11-20 07:28:27.935577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.935609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-11-20 07:28:27.935949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.935979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-11-20 07:28:27.936361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.936392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-11-20 07:28:27.936721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.936751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-11-20 07:28:27.936981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.937010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-11-20 07:28:27.937309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.937341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-11-20 07:28:27.937713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.937741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 
00:30:05.851 [2024-11-20 07:28:27.937944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.937972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-11-20 07:28:27.938330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.938361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-11-20 07:28:27.938588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.938617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-11-20 07:28:27.938985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.939014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-11-20 07:28:27.939413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-11-20 07:28:27.939444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.852 qpair failed and we were unable to recover it. 00:30:05.852 [2024-11-20 07:28:27.939791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.852 [2024-11-20 07:28:27.939821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.852 qpair failed and we were unable to recover it. 00:30:05.852 [2024-11-20 07:28:27.940204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.852 [2024-11-20 07:28:27.940234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.852 qpair failed and we were unable to recover it. 00:30:05.852 [2024-11-20 07:28:27.940610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.852 [2024-11-20 07:28:27.940640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.852 qpair failed and we were unable to recover it. 00:30:05.852 [2024-11-20 07:28:27.940755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.852 [2024-11-20 07:28:27.940787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cc0c0 with addr=10.0.0.2, port=4420 00:30:05.852 qpair failed and we were unable to recover it. 
00:30:05.852 Read completed with error (sct=0, sc=8)
00:30:05.852 starting I/O failed
[... 32 Read/Write completions in total failed with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:30:05.852 [2024-11-20 07:28:27.941616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:05.852 [2024-11-20 07:28:27.941950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.852 [2024-11-20 07:28:27.942014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420
00:30:05.852 qpair failed and we were unable to recover it.
00:30:05.852 [2024-11-20 07:28:27.942461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.852 [2024-11-20 07:28:27.942559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420
00:30:05.852 qpair failed and we were unable to recover it.
[... identical connect() failed, errno = 111 (ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it." records for tqpair=0x7f6568000b90 repeated from 07:28:27.942878 through 07:28:27.987367 ...]
00:30:05.855 [2024-11-20 07:28:27.987575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.855 [2024-11-20 07:28:27.987603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420
00:30:05.855 qpair failed and we were unable to recover it.
00:30:05.856 [2024-11-20 07:28:27.987966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.987996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.988210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.988240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.988488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.988517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.988744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.988773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.989019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.989049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.989307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.989338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.989706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.989736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.990095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.990125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.990392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.990429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.990790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.990818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 
00:30:05.856 [2024-11-20 07:28:27.991146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.991190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.991551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.991581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.991913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.991942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.992295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.992325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.992668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.992697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.992960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.992990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.993362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.993394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.993766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.993795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.994174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.994207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.994532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.994561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 
00:30:05.856 [2024-11-20 07:28:27.994884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.994914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.995300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.995331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.995714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.995745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.996092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.996123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.996504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.996536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.996918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.996949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.997336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.997368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.997698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.997727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.997932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.997962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.998283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.998314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 
00:30:05.856 [2024-11-20 07:28:27.998660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.998689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.998982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.999017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.999385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.999416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:27.999739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:27.999769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:28.000128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:28.000166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:28.000527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:28.000557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:28.000783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:28.000817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.856 [2024-11-20 07:28:28.001107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.856 [2024-11-20 07:28:28.001138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.856 qpair failed and we were unable to recover it. 00:30:05.857 [2024-11-20 07:28:28.001539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.857 [2024-11-20 07:28:28.001570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.857 qpair failed and we were unable to recover it. 00:30:05.857 [2024-11-20 07:28:28.001903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.857 [2024-11-20 07:28:28.001931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.857 qpair failed and we were unable to recover it. 
00:30:05.857 [2024-11-20 07:28:28.002301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.857 [2024-11-20 07:28:28.002331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.857 qpair failed and we were unable to recover it. 00:30:05.857 [2024-11-20 07:28:28.002666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.857 [2024-11-20 07:28:28.002695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.857 qpair failed and we were unable to recover it. 00:30:05.857 [2024-11-20 07:28:28.002958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.857 [2024-11-20 07:28:28.002988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.857 qpair failed and we were unable to recover it. 00:30:05.857 [2024-11-20 07:28:28.003327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.857 [2024-11-20 07:28:28.003360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.857 qpair failed and we were unable to recover it. 00:30:05.857 [2024-11-20 07:28:28.003578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.857 [2024-11-20 07:28:28.003609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.857 qpair failed and we were unable to recover it. 00:30:05.857 [2024-11-20 07:28:28.003935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.857 [2024-11-20 07:28:28.003967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.857 qpair failed and we were unable to recover it. 00:30:05.857 [2024-11-20 07:28:28.004332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.857 [2024-11-20 07:28:28.004364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.857 qpair failed and we were unable to recover it. 00:30:05.857 [2024-11-20 07:28:28.004724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.857 [2024-11-20 07:28:28.004754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.857 qpair failed and we were unable to recover it. 00:30:05.857 [2024-11-20 07:28:28.004974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.857 [2024-11-20 07:28:28.005003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.857 qpair failed and we were unable to recover it. 00:30:05.857 [2024-11-20 07:28:28.005329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.857 [2024-11-20 07:28:28.005361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.857 qpair failed and we were unable to recover it. 
00:30:05.857 [2024-11-20 07:28:28.005720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.857 [2024-11-20 07:28:28.005749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.857 qpair failed and we were unable to recover it. 00:30:05.857 [2024-11-20 07:28:28.006116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.857 [2024-11-20 07:28:28.006146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.857 qpair failed and we were unable to recover it. 00:30:05.857 [2024-11-20 07:28:28.006369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.857 [2024-11-20 07:28:28.006400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.857 qpair failed and we were unable to recover it. 00:30:05.857 [2024-11-20 07:28:28.006611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.857 [2024-11-20 07:28:28.006640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.857 qpair failed and we were unable to recover it. 00:30:05.857 [2024-11-20 07:28:28.006884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.857 [2024-11-20 07:28:28.006914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.857 qpair failed and we were unable to recover it. 00:30:05.857 [2024-11-20 07:28:28.007026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.857 [2024-11-20 07:28:28.007055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6568000b90 with addr=10.0.0.2, port=4420 00:30:05.857 qpair failed and we were unable to recover it. 
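(errno = 111 is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420 at this point in the test, so every reconnect attempt by the NVMe/TCP initiator fails immediately. A minimal standalone sketch of the same failure mode, using plain POSIX sockets rather than SPDK's posix.c; the address and port are taken from the log above, everything else is illustrative:)

    /* connect_probe.c - reproduce the ECONNREFUSED (errno 111) seen above.
     * Build: cc -o connect_probe connect_probe.c */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        /* With no listener on 10.0.0.2:4420 this fails with errno = 111. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }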
00:30:05.857 [2024-11-20 07:28:28.007305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1e00 is same with the state(6) to be set
00:30:05.857 Read completed with error (sct=0, sc=8)
00:30:05.857 starting I/O failed
00:30:05.857 last 2 messages repeated for all 32 outstanding I/Os on the qpair (21 reads, 11 writes, all with sct=0, sc=8)
00:30:05.857 [2024-11-20 07:28:28.008089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:05.857 [2024-11-20 07:28:28.008643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.857 [2024-11-20 07:28:28.008726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:05.857 qpair failed and we were unable to recover it.
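(The CQ transport error -6 above is -ENXIO returned by spdk_nvme_qpair_process_completions(), which is how the host side notices the qpair is dead and fails its outstanding I/Os. A hedged sketch of that polling pattern: the function name and its negative-errno return convention are the public SPDK API, while the surrounding harness, an already-connected qpair and the caller's teardown step, is assumed for illustration:)

    /* Poll a qpair; a negative return such as -ENXIO ("No such device or
     * address", the -6 in the log) means the transport connection is gone
     * and queued I/Os complete with an error status instead of data. */
    #include <spdk/nvme.h>
    #include <stdio.h>

    static void poll_qpair(struct spdk_nvme_qpair *qpair)
    {
        /* max_completions = 0: process as many completions as are ready */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);
        if (rc < 0) {
            /* e.g. rc == -ENXIO after the TCP socket dropped */
            fprintf(stderr, "CQ transport error %d on qpair\n", rc);
            /* the caller would now tear down or reconnect the qpair */
        }
    }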
00:30:05.857 [2024-11-20 07:28:28.009068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.857 [2024-11-20 07:28:28.009091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:05.857 qpair failed and we were unable to recover it.
00:30:05.860 last 3 messages repeated ~99 times for tqpair=0x7f6570000b90, through [2024-11-20 07:28:28.041455]
00:30:05.860 [2024-11-20 07:28:28.041781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.860 [2024-11-20 07:28:28.041800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.860 qpair failed and we were unable to recover it. 00:30:05.860 [2024-11-20 07:28:28.041863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.860 [2024-11-20 07:28:28.041878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.860 qpair failed and we were unable to recover it. 00:30:05.860 [2024-11-20 07:28:28.042096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.860 [2024-11-20 07:28:28.042111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.860 qpair failed and we were unable to recover it. 00:30:05.860 [2024-11-20 07:28:28.042468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.860 [2024-11-20 07:28:28.042484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.860 qpair failed and we were unable to recover it. 00:30:05.860 [2024-11-20 07:28:28.042795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.860 [2024-11-20 07:28:28.042811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.860 qpair failed and we were unable to recover it. 00:30:05.860 [2024-11-20 07:28:28.043171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.860 [2024-11-20 07:28:28.043187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.860 qpair failed and we were unable to recover it. 00:30:05.860 [2024-11-20 07:28:28.043443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.860 [2024-11-20 07:28:28.043458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.860 qpair failed and we were unable to recover it. 00:30:05.860 [2024-11-20 07:28:28.043800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.860 [2024-11-20 07:28:28.043816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.860 qpair failed and we were unable to recover it. 00:30:05.860 [2024-11-20 07:28:28.044170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.860 [2024-11-20 07:28:28.044187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.860 qpair failed and we were unable to recover it. 00:30:05.860 [2024-11-20 07:28:28.044509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.860 [2024-11-20 07:28:28.044524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.860 qpair failed and we were unable to recover it. 
00:30:05.860 [2024-11-20 07:28:28.044860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.860 [2024-11-20 07:28:28.044876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.860 qpair failed and we were unable to recover it. 00:30:05.860 [2024-11-20 07:28:28.045231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.860 [2024-11-20 07:28:28.045249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.860 qpair failed and we were unable to recover it. 00:30:05.860 [2024-11-20 07:28:28.045551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.860 [2024-11-20 07:28:28.045568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.860 qpair failed and we were unable to recover it. 00:30:05.860 [2024-11-20 07:28:28.045891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.860 [2024-11-20 07:28:28.045906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.860 qpair failed and we were unable to recover it. 00:30:05.860 [2024-11-20 07:28:28.046239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.860 [2024-11-20 07:28:28.046255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.860 qpair failed and we were unable to recover it. 00:30:05.860 [2024-11-20 07:28:28.046438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.046457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.046772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.046787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.047006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.047022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.047330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.047347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.047528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.047542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 
00:30:05.861 [2024-11-20 07:28:28.047858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.047873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.048070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.048089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.048425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.048441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.048782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.048799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.049114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.049130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.049477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.049503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.049573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.049590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.049921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.049937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.050288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.050305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.050661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.050677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 
00:30:05.861 [2024-11-20 07:28:28.050863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.050878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.051212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.051229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.051553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.051569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.051915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.051930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.052128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.052143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.052525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.052541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.052883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.052899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.053114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.053131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.053333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.053350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.053689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.053705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 
00:30:05.861 [2024-11-20 07:28:28.053899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.053918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.054257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.054273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.054638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.054653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.055003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.055018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.055322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.055337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.055695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.055712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.056016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.056031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.056093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.056108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.056448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.056464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.056767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.056783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 
00:30:05.861 [2024-11-20 07:28:28.057120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.057137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.057449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.057467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.057811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.861 [2024-11-20 07:28:28.057829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.861 qpair failed and we were unable to recover it. 00:30:05.861 [2024-11-20 07:28:28.058024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.058039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.058389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.058407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.058772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.058791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.059102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.059119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.059419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.059436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.059737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.059755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.060099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.060116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 
00:30:05.862 [2024-11-20 07:28:28.060431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.060447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.060631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.060646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.060838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.060853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.061192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.061209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.061430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.061445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.061805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.061820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.062131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.062146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.062353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.062371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.062696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.062712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.063015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.063034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 
00:30:05.862 [2024-11-20 07:28:28.063381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.063399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.063737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.063753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.064054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.064069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.064267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.064284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.064639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.064654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.064859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.064876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.065216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.065233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.065522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.065537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.065896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.065912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.066215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.066232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 
00:30:05.862 [2024-11-20 07:28:28.066437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.066459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.066744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.066760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.066963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.066978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.067246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.067262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.067625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.067642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.067829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.067844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.068148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.068180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.068530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.068547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.068885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.068901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.069245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.069262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 
00:30:05.862 [2024-11-20 07:28:28.069622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.862 [2024-11-20 07:28:28.069637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.862 qpair failed and we were unable to recover it. 00:30:05.862 [2024-11-20 07:28:28.069840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.069857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.070196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.070213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.070526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.070551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.070731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.070748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.071041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.071056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.071396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.071414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.071598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.071619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.071996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.072014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.072328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.072345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 
00:30:05.863 [2024-11-20 07:28:28.072686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.072704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.073053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.073070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.073274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.073292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.073631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.073646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.073831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.073848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.074035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.074053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.074409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.074428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.074693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.074711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.074896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.074913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.075185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.075203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 
00:30:05.863 [2024-11-20 07:28:28.075540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.075556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.075919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.075934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.076291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.076307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.076616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.076633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.076989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.077004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.077344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.077362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.077697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.077728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.077911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.077927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.078128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.078144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.078495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.078512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 
00:30:05.863 [2024-11-20 07:28:28.078856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.078879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.079177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.079194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.079385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.079403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.079721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.079741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.080042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.080058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.080403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.080421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.080623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.080638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.080832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.080848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.863 [2024-11-20 07:28:28.080918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.863 [2024-11-20 07:28:28.080935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.863 qpair failed and we were unable to recover it. 00:30:05.864 [2024-11-20 07:28:28.081222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.864 [2024-11-20 07:28:28.081239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:05.864 qpair failed and we were unable to recover it. 
00:30:05.864 [2024-11-20 07:28:28.081440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.864 [2024-11-20 07:28:28.081456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:05.864 qpair failed and we were unable to recover it.
[the three messages above repeat, with only the timestamps advancing ([2024-11-20 07:28:28.081833] through [2024-11-20 07:28:28.146256]), for roughly 200 further connection attempts; every attempt fails with connect() errno = 111 on the same tqpair=0x7f6570000b90, addr=10.0.0.2, port=4420, and every one ends with "qpair failed and we were unable to recover it."]
00:30:06.146 [2024-11-20 07:28:28.146621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.146 [2024-11-20 07:28:28.146636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.146 qpair failed and we were unable to recover it. 00:30:06.146 [2024-11-20 07:28:28.147084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.146 [2024-11-20 07:28:28.147100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.146 qpair failed and we were unable to recover it. 00:30:06.146 [2024-11-20 07:28:28.147398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.146 [2024-11-20 07:28:28.147414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.146 qpair failed and we were unable to recover it. 00:30:06.146 [2024-11-20 07:28:28.147780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.146 [2024-11-20 07:28:28.147797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.146 qpair failed and we were unable to recover it. 00:30:06.146 [2024-11-20 07:28:28.148097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.146 [2024-11-20 07:28:28.148112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.146 qpair failed and we were unable to recover it. 00:30:06.146 [2024-11-20 07:28:28.148454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.146 [2024-11-20 07:28:28.148470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.146 qpair failed and we were unable to recover it. 00:30:06.146 [2024-11-20 07:28:28.148816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.146 [2024-11-20 07:28:28.148831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.146 qpair failed and we were unable to recover it. 00:30:06.146 [2024-11-20 07:28:28.149174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.146 [2024-11-20 07:28:28.149194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.146 qpair failed and we were unable to recover it. 00:30:06.146 [2024-11-20 07:28:28.149433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.146 [2024-11-20 07:28:28.149449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.146 qpair failed and we were unable to recover it. 00:30:06.146 [2024-11-20 07:28:28.149786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.146 [2024-11-20 07:28:28.149801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.146 qpair failed and we were unable to recover it. 
00:30:06.146 [2024-11-20 07:28:28.150154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.146 [2024-11-20 07:28:28.150176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.146 qpair failed and we were unable to recover it. 00:30:06.146 [2024-11-20 07:28:28.150537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.146 [2024-11-20 07:28:28.150554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.146 qpair failed and we were unable to recover it. 00:30:06.146 [2024-11-20 07:28:28.150905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.146 [2024-11-20 07:28:28.150922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.146 qpair failed and we were unable to recover it. 00:30:06.146 [2024-11-20 07:28:28.151255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.146 [2024-11-20 07:28:28.151273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.146 qpair failed and we were unable to recover it. 00:30:06.146 [2024-11-20 07:28:28.151641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.146 [2024-11-20 07:28:28.151658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.146 qpair failed and we were unable to recover it. 00:30:06.146 [2024-11-20 07:28:28.151994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.146 [2024-11-20 07:28:28.152010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.146 qpair failed and we were unable to recover it. 00:30:06.146 [2024-11-20 07:28:28.152370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.146 [2024-11-20 07:28:28.152386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.146 qpair failed and we were unable to recover it. 00:30:06.146 [2024-11-20 07:28:28.152573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.146 [2024-11-20 07:28:28.152589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.146 qpair failed and we were unable to recover it. 00:30:06.146 [2024-11-20 07:28:28.152801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.146 [2024-11-20 07:28:28.152817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.146 qpair failed and we were unable to recover it. 00:30:06.146 [2024-11-20 07:28:28.153046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.146 [2024-11-20 07:28:28.153063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.146 qpair failed and we were unable to recover it. 
00:30:06.146 [2024-11-20 07:28:28.153275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.146 [2024-11-20 07:28:28.153291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.146 qpair failed and we were unable to recover it. 00:30:06.146 [2024-11-20 07:28:28.153406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.146 [2024-11-20 07:28:28.153422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.146 qpair failed and we were unable to recover it. 00:30:06.146 [2024-11-20 07:28:28.153603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.153618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.153802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.153833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.154136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.154151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.154508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.154524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.154866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.154883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.155189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.155207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.155555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.155571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.155975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.155991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 
00:30:06.147 [2024-11-20 07:28:28.156321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.156338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.156660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.156675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.156998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.157014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.157314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.157330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.157540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.157556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.157917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.157932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.158166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.158183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.158521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.158537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.158864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.158880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.159084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.159104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 
00:30:06.147 [2024-11-20 07:28:28.159309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.159325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.159560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.159576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.159663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.159679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.159964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.159983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.160287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.160303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.160640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.160656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.160864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.160880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.161104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.161124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.161373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.161392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.161618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.161634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 
00:30:06.147 [2024-11-20 07:28:28.161860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.161877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.162117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.162135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.147 [2024-11-20 07:28:28.162347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.147 [2024-11-20 07:28:28.162366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.147 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.162582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.162597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.162935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.162951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.163293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.163309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.163497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.163512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.163751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.163767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.164049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.164065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.164360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.164376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 
00:30:06.148 [2024-11-20 07:28:28.164564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.164581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.164794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.164811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.165173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.165190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.165558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.165574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.165646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.165660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.165782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.165797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.166189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.166209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.166548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.166578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.166893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.166910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.167114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.167131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 
00:30:06.148 [2024-11-20 07:28:28.167333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.167349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.167558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.167573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.167845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.167861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.168042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.168061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.168268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.168284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.168582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.168598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.168925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.168942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.169127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.169142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.169496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.169512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.169749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.169766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 
00:30:06.148 [2024-11-20 07:28:28.170114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.170130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.170450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.170466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.170798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.170814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.171047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.171063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.171275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.148 [2024-11-20 07:28:28.171292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.148 qpair failed and we were unable to recover it. 00:30:06.148 [2024-11-20 07:28:28.171600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.171615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.171809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.171826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.172164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.172184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.172338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.172354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.172752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.172768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 
00:30:06.149 [2024-11-20 07:28:28.173126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.173142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.173472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.173489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.173662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.173677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.173883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.173899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.174086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.174106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.174312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.174328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.174627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.174645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.174824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.174840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.175187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.175203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.175401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.175421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 
00:30:06.149 [2024-11-20 07:28:28.175739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.175755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.175977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.175994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.176191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.176208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.176519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.176535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.176862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.176877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.177228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.177245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.177569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.177586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.177649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.177664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.177872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.177888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.178234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.178251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 
00:30:06.149 [2024-11-20 07:28:28.178576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.178592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.178800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.178815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.179015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.179030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.179383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.179399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.179763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.149 [2024-11-20 07:28:28.179778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.149 qpair failed and we were unable to recover it. 00:30:06.149 [2024-11-20 07:28:28.179967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.179983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.180190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.180207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.180539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.180556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.180739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.180754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.180961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.180976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 
00:30:06.150 [2024-11-20 07:28:28.181187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.181204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.181505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.181521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.181714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.181729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.182035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.182052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.182371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.182390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.182604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.182623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.182847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.182864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.183049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.183074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.183457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.183474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.183776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.183793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 
00:30:06.150 [2024-11-20 07:28:28.184088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.184105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.184334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.184351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.184572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.184589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.184950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.184968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.185039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.185055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.185428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.185445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.185738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.185754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.185943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.185960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.186308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.186324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.186542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.186559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 
00:30:06.150 [2024-11-20 07:28:28.186772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.186790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.186972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.186991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.187291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.187308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.187667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.187683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.187992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.188008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.188212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.150 [2024-11-20 07:28:28.188228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.150 qpair failed and we were unable to recover it. 00:30:06.150 [2024-11-20 07:28:28.188410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.188426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.188849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.188867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.189064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.189082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.189338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.189355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 
00:30:06.151 [2024-11-20 07:28:28.189423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.189437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.189801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.189816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.190007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.190023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.190382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.190399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.190591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.190608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.190810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.190829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.191182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.191200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.191506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.191521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.191780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.191795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.192085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.192101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 
00:30:06.151 [2024-11-20 07:28:28.192285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.192301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.192610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.192625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.192811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.192826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.193035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.193051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.193356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.193372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.193602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.193619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.193955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.193970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.194185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.194205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.194386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.194402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.194639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.194656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 
00:30:06.151 [2024-11-20 07:28:28.195011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.195026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.195360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.195377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.195637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.195653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.195847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.195864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.196215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.196231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.196342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.196358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.196667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.196682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.197028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.151 [2024-11-20 07:28:28.197044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.151 qpair failed and we were unable to recover it. 00:30:06.151 [2024-11-20 07:28:28.197252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.197267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 00:30:06.152 [2024-11-20 07:28:28.197359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.197374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 
00:30:06.152 [2024-11-20 07:28:28.197683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.197698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 00:30:06.152 [2024-11-20 07:28:28.198012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.198029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 00:30:06.152 [2024-11-20 07:28:28.198393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.198410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 00:30:06.152 [2024-11-20 07:28:28.198649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.198664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 00:30:06.152 [2024-11-20 07:28:28.199027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.199042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 00:30:06.152 [2024-11-20 07:28:28.199386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.199402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 00:30:06.152 [2024-11-20 07:28:28.199603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.199622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 00:30:06.152 [2024-11-20 07:28:28.199942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.199958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 00:30:06.152 [2024-11-20 07:28:28.200275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.200292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 00:30:06.152 [2024-11-20 07:28:28.200657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.200673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 
00:30:06.152 [2024-11-20 07:28:28.201016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.201031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 00:30:06.152 [2024-11-20 07:28:28.201243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.201259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 00:30:06.152 [2024-11-20 07:28:28.201473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.201488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 00:30:06.152 [2024-11-20 07:28:28.201684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.201701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 00:30:06.152 [2024-11-20 07:28:28.202055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.202071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 00:30:06.152 [2024-11-20 07:28:28.202397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.202415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 00:30:06.152 [2024-11-20 07:28:28.202617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.202633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 00:30:06.152 [2024-11-20 07:28:28.202837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.202853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 00:30:06.152 [2024-11-20 07:28:28.203208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.203224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 00:30:06.152 [2024-11-20 07:28:28.203540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.203556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 
00:30:06.152 [2024-11-20 07:28:28.203618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.203632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 00:30:06.152 [2024-11-20 07:28:28.203933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.203949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 00:30:06.152 [2024-11-20 07:28:28.204250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.204266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 00:30:06.152 [2024-11-20 07:28:28.204572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.152 [2024-11-20 07:28:28.204587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.152 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.204788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.204806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.204982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.204999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.205224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.205241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.205471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.205490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.205817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.205834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.206212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.206228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 
00:30:06.153 [2024-11-20 07:28:28.206430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.206445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.206636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.206651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.207006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.207023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.207324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.207340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.207698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.207713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.208041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.208057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.208432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.208448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.208781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.208797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.209018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.209036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.209370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.209388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 
00:30:06.153 [2024-11-20 07:28:28.209686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.209702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.209784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.209801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.210121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.210138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.210493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.210510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.210695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.210711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.211052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.211068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.211389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.211408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.211598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.211614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.211807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.211825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.212174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.212192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 
00:30:06.153 [2024-11-20 07:28:28.212383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.212397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.212790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.212807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.213105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.213121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.213443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.213458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.153 [2024-11-20 07:28:28.213817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.153 [2024-11-20 07:28:28.213834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.153 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.214176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.214193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.214383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.214401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.214723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.214739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.214952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.214969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.215180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.215196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 
00:30:06.154 [2024-11-20 07:28:28.215460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.215476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.215838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.215854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.216091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.216107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.216416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.216433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.216785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.216800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.217009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.217025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.217369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.217385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.217593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.217614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.217837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.217853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.218075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.218091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 
00:30:06.154 [2024-11-20 07:28:28.218315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.218332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.218679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.218694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.218878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.218894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.219234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.219251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.219603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.219619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.219917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.219932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.220116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.220132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.220230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.220249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.220555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.220573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.220909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.220924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 
00:30:06.154 [2024-11-20 07:28:28.221254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.221269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.221636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.221653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.221992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.222008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.222219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.222235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.222586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.222601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.223006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.223023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.223224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.223241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.223497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.223521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.223861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.154 [2024-11-20 07:28:28.223877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.154 qpair failed and we were unable to recover it. 00:30:06.154 [2024-11-20 07:28:28.224095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.224109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 
00:30:06.155 [2024-11-20 07:28:28.224345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.224361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.224544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.224560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.224875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.224892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.225240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.225258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.225370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.225385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.225563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.225580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.225902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.225918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.226234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.226251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.226651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.226667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.226966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.226983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 
00:30:06.155 [2024-11-20 07:28:28.227183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.227200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.227397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.227412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.227490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.227505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.227699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.227713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.228034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.228049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.228237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.228253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.228605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.228620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.228918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.228948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.229020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.229035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.229377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.229393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 
00:30:06.155 [2024-11-20 07:28:28.229742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.229759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.230100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.230116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.230308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.230324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.230667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.230683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.231024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.231039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.231338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.231354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.231532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.231548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.231846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.231863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.232170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.232186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.232426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.232442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 
00:30:06.155 [2024-11-20 07:28:28.232664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.155 [2024-11-20 07:28:28.232679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.155 qpair failed and we were unable to recover it. 00:30:06.155 [2024-11-20 07:28:28.233015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.156 [2024-11-20 07:28:28.233032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.156 qpair failed and we were unable to recover it. 00:30:06.156 [2024-11-20 07:28:28.233343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.156 [2024-11-20 07:28:28.233360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.156 qpair failed and we were unable to recover it. 00:30:06.156 [2024-11-20 07:28:28.233579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.156 [2024-11-20 07:28:28.233594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.156 qpair failed and we were unable to recover it. 00:30:06.156 [2024-11-20 07:28:28.233924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.156 [2024-11-20 07:28:28.233941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.156 qpair failed and we were unable to recover it. 00:30:06.156 [2024-11-20 07:28:28.234234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.156 [2024-11-20 07:28:28.234251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.156 qpair failed and we were unable to recover it. 00:30:06.156 [2024-11-20 07:28:28.234321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.156 [2024-11-20 07:28:28.234336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.156 qpair failed and we were unable to recover it. 00:30:06.156 [2024-11-20 07:28:28.234709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.156 [2024-11-20 07:28:28.234724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.156 qpair failed and we were unable to recover it. 00:30:06.156 [2024-11-20 07:28:28.235028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.156 [2024-11-20 07:28:28.235044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.156 qpair failed and we were unable to recover it. 00:30:06.156 [2024-11-20 07:28:28.235410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.156 [2024-11-20 07:28:28.235426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.156 qpair failed and we were unable to recover it. 
00:30:06.162 [2024-11-20 07:28:28.294057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.162 [2024-11-20 07:28:28.294072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.162 qpair failed and we were unable to recover it. 00:30:06.162 [2024-11-20 07:28:28.294381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.162 [2024-11-20 07:28:28.294398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.162 qpair failed and we were unable to recover it. 00:30:06.162 [2024-11-20 07:28:28.294738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.162 [2024-11-20 07:28:28.294753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.162 qpair failed and we were unable to recover it. 00:30:06.162 [2024-11-20 07:28:28.295099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.162 [2024-11-20 07:28:28.295117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.162 qpair failed and we were unable to recover it. 00:30:06.162 [2024-11-20 07:28:28.295464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.162 [2024-11-20 07:28:28.295482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.162 qpair failed and we were unable to recover it. 00:30:06.162 [2024-11-20 07:28:28.295773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.162 [2024-11-20 07:28:28.295789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.162 qpair failed and we were unable to recover it. 00:30:06.162 [2024-11-20 07:28:28.296136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.162 [2024-11-20 07:28:28.296152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.162 qpair failed and we were unable to recover it. 00:30:06.162 [2024-11-20 07:28:28.296505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.162 [2024-11-20 07:28:28.296520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.162 qpair failed and we were unable to recover it. 00:30:06.162 [2024-11-20 07:28:28.296821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.162 [2024-11-20 07:28:28.296848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.162 qpair failed and we were unable to recover it. 00:30:06.162 [2024-11-20 07:28:28.297059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.162 [2024-11-20 07:28:28.297075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.162 qpair failed and we were unable to recover it. 
00:30:06.162 [2024-11-20 07:28:28.297301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.162 [2024-11-20 07:28:28.297318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.162 qpair failed and we were unable to recover it. 00:30:06.162 [2024-11-20 07:28:28.297618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.162 [2024-11-20 07:28:28.297633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.162 qpair failed and we were unable to recover it. 00:30:06.162 [2024-11-20 07:28:28.297850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.162 [2024-11-20 07:28:28.297866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.162 qpair failed and we were unable to recover it. 00:30:06.162 [2024-11-20 07:28:28.298232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.162 [2024-11-20 07:28:28.298248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.162 qpair failed and we were unable to recover it. 00:30:06.162 [2024-11-20 07:28:28.298584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.162 [2024-11-20 07:28:28.298603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.162 qpair failed and we were unable to recover it. 00:30:06.162 [2024-11-20 07:28:28.298943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.162 [2024-11-20 07:28:28.298959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.162 qpair failed and we were unable to recover it. 00:30:06.162 [2024-11-20 07:28:28.299268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.299284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 00:30:06.163 [2024-11-20 07:28:28.299494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.299512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 00:30:06.163 [2024-11-20 07:28:28.299855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.299871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 00:30:06.163 [2024-11-20 07:28:28.300215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.300231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 
00:30:06.163 [2024-11-20 07:28:28.300553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.300577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 00:30:06.163 [2024-11-20 07:28:28.300900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.300915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 00:30:06.163 [2024-11-20 07:28:28.301250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.301266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 00:30:06.163 [2024-11-20 07:28:28.301492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.301509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 00:30:06.163 [2024-11-20 07:28:28.301722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.301737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 00:30:06.163 [2024-11-20 07:28:28.302096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.302121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 00:30:06.163 [2024-11-20 07:28:28.302305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.302323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 00:30:06.163 [2024-11-20 07:28:28.302646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.302661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 00:30:06.163 [2024-11-20 07:28:28.302886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.302901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 00:30:06.163 [2024-11-20 07:28:28.303109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.303142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 
00:30:06.163 [2024-11-20 07:28:28.303469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.303486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 00:30:06.163 [2024-11-20 07:28:28.303832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.303848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 00:30:06.163 [2024-11-20 07:28:28.304157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.304179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 00:30:06.163 [2024-11-20 07:28:28.304380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.304396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 00:30:06.163 [2024-11-20 07:28:28.304742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.304757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 00:30:06.163 [2024-11-20 07:28:28.305100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.305115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 00:30:06.163 [2024-11-20 07:28:28.305394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.305409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 00:30:06.163 [2024-11-20 07:28:28.305631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.305647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 00:30:06.163 [2024-11-20 07:28:28.306012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.306028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 00:30:06.163 [2024-11-20 07:28:28.306326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.306341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 
00:30:06.163 [2024-11-20 07:28:28.306684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.306699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.163 qpair failed and we were unable to recover it. 00:30:06.163 [2024-11-20 07:28:28.306770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.163 [2024-11-20 07:28:28.306784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.306964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.306979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.307332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.307349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.307699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.307714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.308015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.308031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.308246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.308263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.308582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.308597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.308951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.308967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.309272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.309289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 
00:30:06.164 [2024-11-20 07:28:28.309603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.309618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.309969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.309984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.310321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.310337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.310654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.310671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.310884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.310903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.311260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.311276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.311627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.311642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.311868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.311884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.312240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.312256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.312463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.312478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 
00:30:06.164 [2024-11-20 07:28:28.312789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.312806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.313020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.313040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.313255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.313271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.313429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.313445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.313790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.313805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.314150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.314170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.314479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.314494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.314691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.314706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.315067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.315084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 00:30:06.164 [2024-11-20 07:28:28.315317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.315333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.164 qpair failed and we were unable to recover it. 
00:30:06.164 [2024-11-20 07:28:28.315677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.164 [2024-11-20 07:28:28.315692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.316007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.316023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.316241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.316270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.316612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.316627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.316811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.316828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.317179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.317195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.317420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.317435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.317765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.317781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.318124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.318138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.318403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.318419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 
00:30:06.165 [2024-11-20 07:28:28.318618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.318633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.318830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.318846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.319025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.319040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.319324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.319340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.319659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.319675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.319869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.319885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.320183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.320202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.320527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.320543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.320908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.320924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.321246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.321262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 
00:30:06.165 [2024-11-20 07:28:28.321452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.321469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.321643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.321659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.322006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.322023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.322375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.322393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.322655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.322675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.322986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.323001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.323330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.323348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.323551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.323570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.323913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.323930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 00:30:06.165 [2024-11-20 07:28:28.324290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.165 [2024-11-20 07:28:28.324307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.165 qpair failed and we were unable to recover it. 
00:30:06.165 [2024-11-20 07:28:28.324509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.324526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.324711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.324727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.324924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.324940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.325204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.325221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.325551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.325566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.325691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.325707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.326070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.326087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.326443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.326460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.326678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.326694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.326886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.326905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 
00:30:06.166 [2024-11-20 07:28:28.327094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.327111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.327461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.327477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.327818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.327834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.328197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.328216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.328591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.328612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.328960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.328979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.329330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.329348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.329704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.329724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.330019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.330037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.330249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.330269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 
00:30:06.166 [2024-11-20 07:28:28.330568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.330586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.330893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.330912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.331109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.331127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.331532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.331552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.331896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.331913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.332270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.332288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.332627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.332647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.332990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.333010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.333328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.333347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.166 [2024-11-20 07:28:28.333548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.333566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 
00:30:06.166 [2024-11-20 07:28:28.333930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.166 [2024-11-20 07:28:28.333950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.166 qpair failed and we were unable to recover it. 00:30:06.167 [2024-11-20 07:28:28.334295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.167 [2024-11-20 07:28:28.334314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.167 qpair failed and we were unable to recover it. 00:30:06.167 [2024-11-20 07:28:28.334381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.167 [2024-11-20 07:28:28.334397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.167 qpair failed and we were unable to recover it. 00:30:06.167 [2024-11-20 07:28:28.334698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.167 [2024-11-20 07:28:28.334716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.167 qpair failed and we were unable to recover it. 00:30:06.167 [2024-11-20 07:28:28.335064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.167 [2024-11-20 07:28:28.335087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.167 qpair failed and we were unable to recover it. 00:30:06.167 [2024-11-20 07:28:28.335430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.167 [2024-11-20 07:28:28.335451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.167 qpair failed and we were unable to recover it. 00:30:06.167 [2024-11-20 07:28:28.335674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.167 [2024-11-20 07:28:28.335691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.167 qpair failed and we were unable to recover it. 00:30:06.167 [2024-11-20 07:28:28.336032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.167 [2024-11-20 07:28:28.336051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.167 qpair failed and we were unable to recover it. 00:30:06.167 [2024-11-20 07:28:28.336341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.167 [2024-11-20 07:28:28.336359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.167 qpair failed and we were unable to recover it. 00:30:06.167 [2024-11-20 07:28:28.336686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.167 [2024-11-20 07:28:28.336706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.167 qpair failed and we were unable to recover it. 
00:30:06.167 [2024-11-20 07:28:28.337025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.167 [2024-11-20 07:28:28.337042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.167 qpair failed and we were unable to recover it.
00:30:06.167 [2024-11-20 07:28:28.337341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.167 [2024-11-20 07:28:28.337359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.167 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every retry from 07:28:28.337 through 07:28:28.402, all against the same tqpair, address, and port ...]
00:30:06.449 [2024-11-20 07:28:28.402692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.449 [2024-11-20 07:28:28.402707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.449 qpair failed and we were unable to recover it.
00:30:06.449 [2024-11-20 07:28:28.403011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.449 [2024-11-20 07:28:28.403033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.449 qpair failed and we were unable to recover it. 00:30:06.449 [2024-11-20 07:28:28.403243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.449 [2024-11-20 07:28:28.403262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.449 qpair failed and we were unable to recover it. 00:30:06.449 [2024-11-20 07:28:28.403471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.449 [2024-11-20 07:28:28.403489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.449 qpair failed and we were unable to recover it. 00:30:06.449 [2024-11-20 07:28:28.403782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.449 [2024-11-20 07:28:28.403800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.404097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.404116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.404338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.404358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.404697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.404716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.405072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.405092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.405410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.405429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.405773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.405792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 
00:30:06.450 [2024-11-20 07:28:28.405976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.405994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.406212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.406231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.406541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.406559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.406734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.406750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.407045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.407063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.407263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.407281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.407650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.407667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.408016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.408034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.408380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.408397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.408694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.408712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 
00:30:06.450 [2024-11-20 07:28:28.408923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.408940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.409242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.409262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.409561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.409579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.409886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.409903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.410291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.410315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.410497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.410516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.410855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.410873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.411100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.411119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.411490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.411508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.411862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.411882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 
00:30:06.450 [2024-11-20 07:28:28.412082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.412099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.412467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.412485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.412827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.450 [2024-11-20 07:28:28.412847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.450 qpair failed and we were unable to recover it. 00:30:06.450 [2024-11-20 07:28:28.413154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.413181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.413383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.413400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.413582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.413598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.413942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.413961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.414027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.414042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.414275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.414293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.414515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.414537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 
00:30:06.451 [2024-11-20 07:28:28.414912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.414929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.415230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.415248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.415475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.415495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.415882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.415901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.416239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.416257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.416636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.416653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.416840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.416857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.417296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.417314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.417537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.417554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.417857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.417876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 
00:30:06.451 [2024-11-20 07:28:28.418213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.418231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.418528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.418546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.418899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.418917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.419238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.419256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.419626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.419644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.419707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.419721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.420050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.420070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.420250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.420269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.420454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.420473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.420655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.420674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 
00:30:06.451 [2024-11-20 07:28:28.421022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.421040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.421385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.421404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.421744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.421763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.421947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.421965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.451 qpair failed and we were unable to recover it. 00:30:06.451 [2024-11-20 07:28:28.422274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.451 [2024-11-20 07:28:28.422298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.422485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.422505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.422744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.422762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.423172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.423190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.423557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.423575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.423878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.423895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 
00:30:06.452 [2024-11-20 07:28:28.424075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.424092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.424204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.424220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.424555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.424573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.424774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.424793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.424969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.424990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.425339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.425358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.425558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.425574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.425784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.425801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.426150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.426172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.426525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.426543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 
00:30:06.452 [2024-11-20 07:28:28.426730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.426750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.427171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.427192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.427557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.427576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.427917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.427935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.428280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.428297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.428601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.428618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.428957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.428976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.429173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.429192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.429532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.429550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.429892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.429912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 
00:30:06.452 [2024-11-20 07:28:28.430212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.430232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.430526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.430544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.430883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.430901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.431244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.431265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.431607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.452 [2024-11-20 07:28:28.431628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.452 qpair failed and we were unable to recover it. 00:30:06.452 [2024-11-20 07:28:28.431832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.431849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.432206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.432225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.432556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.432574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.432958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.432975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.433329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.433349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 
00:30:06.453 [2024-11-20 07:28:28.433705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.433727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.433992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.434010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.434203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.434222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.434429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.434449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.434772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.434793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.434986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.435003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.435339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.435359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.435569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.435586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.435799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.435818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.436172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.436191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 
00:30:06.453 [2024-11-20 07:28:28.436523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.436542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.436880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.436898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.437085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.437103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.437451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.437469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.437822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.437841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.438044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.438064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.438470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.438491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.438789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.438807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.439102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.439123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.439434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.439453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 
00:30:06.453 [2024-11-20 07:28:28.439642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.439661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.439968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.439988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.440352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.440371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.440582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.440600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.440952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.440971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.441262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.441280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.441497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.453 [2024-11-20 07:28:28.441514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.453 qpair failed and we were unable to recover it. 00:30:06.453 [2024-11-20 07:28:28.441705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.454 [2024-11-20 07:28:28.441724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.454 qpair failed and we were unable to recover it. 00:30:06.454 [2024-11-20 07:28:28.442072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.454 [2024-11-20 07:28:28.442090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.454 qpair failed and we were unable to recover it. 00:30:06.454 [2024-11-20 07:28:28.442443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.454 [2024-11-20 07:28:28.442462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.454 qpair failed and we were unable to recover it. 
00:30:06.454 [2024-11-20 07:28:28.442803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.454 [2024-11-20 07:28:28.442821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.454 qpair failed and we were unable to recover it.
00:30:06.454 [... the same three-line failure record (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420, qpair unrecoverable) repeats for roughly 200 further reconnect attempts between 07:28:28.443 and 07:28:28.506 ...]
00:30:06.461 [2024-11-20 07:28:28.506455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.461 [2024-11-20 07:28:28.506473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.461 qpair failed and we were unable to recover it.
00:30:06.461 [2024-11-20 07:28:28.506694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.461 [2024-11-20 07:28:28.506715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.461 qpair failed and we were unable to recover it. 00:30:06.461 [2024-11-20 07:28:28.507059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.461 [2024-11-20 07:28:28.507077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.461 qpair failed and we were unable to recover it. 00:30:06.461 [2024-11-20 07:28:28.507275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.461 [2024-11-20 07:28:28.507293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.461 qpair failed and we were unable to recover it. 00:30:06.461 [2024-11-20 07:28:28.507399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.461 [2024-11-20 07:28:28.507415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.461 qpair failed and we were unable to recover it. 00:30:06.461 [2024-11-20 07:28:28.507754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.461 [2024-11-20 07:28:28.507772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.461 qpair failed and we were unable to recover it. 00:30:06.461 [2024-11-20 07:28:28.507990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.461 [2024-11-20 07:28:28.508009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.461 qpair failed and we were unable to recover it. 00:30:06.461 [2024-11-20 07:28:28.508337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.461 [2024-11-20 07:28:28.508356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.461 qpair failed and we were unable to recover it. 00:30:06.461 [2024-11-20 07:28:28.508679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.461 [2024-11-20 07:28:28.508698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.461 qpair failed and we were unable to recover it. 00:30:06.461 [2024-11-20 07:28:28.508878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.461 [2024-11-20 07:28:28.508897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.461 qpair failed and we were unable to recover it. 00:30:06.461 [2024-11-20 07:28:28.509232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.461 [2024-11-20 07:28:28.509250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.461 qpair failed and we were unable to recover it. 
00:30:06.461 [2024-11-20 07:28:28.509597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.461 [2024-11-20 07:28:28.509614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.461 qpair failed and we were unable to recover it. 00:30:06.461 [2024-11-20 07:28:28.509798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.461 [2024-11-20 07:28:28.509815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.461 qpair failed and we were unable to recover it. 00:30:06.461 [2024-11-20 07:28:28.510176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.461 [2024-11-20 07:28:28.510195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.461 qpair failed and we were unable to recover it. 00:30:06.461 [2024-11-20 07:28:28.510505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.461 [2024-11-20 07:28:28.510525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.461 qpair failed and we were unable to recover it. 00:30:06.461 [2024-11-20 07:28:28.510878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.461 [2024-11-20 07:28:28.510897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.461 qpair failed and we were unable to recover it. 00:30:06.461 [2024-11-20 07:28:28.511248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.461 [2024-11-20 07:28:28.511267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.461 qpair failed and we were unable to recover it. 00:30:06.461 [2024-11-20 07:28:28.511615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.461 [2024-11-20 07:28:28.511633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.461 qpair failed and we were unable to recover it. 00:30:06.461 [2024-11-20 07:28:28.511823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.461 [2024-11-20 07:28:28.511840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.461 qpair failed and we were unable to recover it. 00:30:06.461 [2024-11-20 07:28:28.512200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.461 [2024-11-20 07:28:28.512218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.461 qpair failed and we were unable to recover it. 00:30:06.461 [2024-11-20 07:28:28.512400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.461 [2024-11-20 07:28:28.512416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.461 qpair failed and we were unable to recover it. 
00:30:06.461 [2024-11-20 07:28:28.512601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.461 [2024-11-20 07:28:28.512618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.461 qpair failed and we were unable to recover it. 00:30:06.461 [2024-11-20 07:28:28.512946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.461 [2024-11-20 07:28:28.512964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.461 qpair failed and we were unable to recover it. 00:30:06.461 [2024-11-20 07:28:28.513144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.462 [2024-11-20 07:28:28.513169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.462 qpair failed and we were unable to recover it. 00:30:06.462 [2024-11-20 07:28:28.513575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.462 [2024-11-20 07:28:28.513595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.462 qpair failed and we were unable to recover it. 00:30:06.462 [2024-11-20 07:28:28.513940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.462 [2024-11-20 07:28:28.513959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.462 qpair failed and we were unable to recover it. 00:30:06.462 [2024-11-20 07:28:28.514147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.462 [2024-11-20 07:28:28.514171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.462 qpair failed and we were unable to recover it. 00:30:06.462 [2024-11-20 07:28:28.514503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.462 [2024-11-20 07:28:28.514521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.462 qpair failed and we were unable to recover it. 00:30:06.462 [2024-11-20 07:28:28.514862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.462 [2024-11-20 07:28:28.514881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.462 qpair failed and we were unable to recover it. 00:30:06.462 [2024-11-20 07:28:28.515094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.462 [2024-11-20 07:28:28.515114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.462 qpair failed and we were unable to recover it. 00:30:06.462 [2024-11-20 07:28:28.515414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.462 [2024-11-20 07:28:28.515432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.462 qpair failed and we were unable to recover it. 
00:30:06.462 [2024-11-20 07:28:28.515788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.462 [2024-11-20 07:28:28.515809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.462 qpair failed and we were unable to recover it. 00:30:06.462 [2024-11-20 07:28:28.516022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.462 [2024-11-20 07:28:28.516045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.462 qpair failed and we were unable to recover it. 00:30:06.462 [2024-11-20 07:28:28.516453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.462 [2024-11-20 07:28:28.516472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.462 qpair failed and we were unable to recover it. 00:30:06.462 [2024-11-20 07:28:28.516688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.462 [2024-11-20 07:28:28.516707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.462 qpair failed and we were unable to recover it. 00:30:06.462 [2024-11-20 07:28:28.517040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.462 [2024-11-20 07:28:28.517058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.462 qpair failed and we were unable to recover it. 00:30:06.462 [2024-11-20 07:28:28.517402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.462 [2024-11-20 07:28:28.517421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.462 qpair failed and we were unable to recover it. 00:30:06.462 [2024-11-20 07:28:28.517760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.462 [2024-11-20 07:28:28.517779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.462 qpair failed and we were unable to recover it. 00:30:06.462 [2024-11-20 07:28:28.518138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.462 [2024-11-20 07:28:28.518170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.462 qpair failed and we were unable to recover it. 00:30:06.462 [2024-11-20 07:28:28.518473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.462 [2024-11-20 07:28:28.518492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.462 qpair failed and we were unable to recover it. 00:30:06.462 [2024-11-20 07:28:28.518792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.462 [2024-11-20 07:28:28.518812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.462 qpair failed and we were unable to recover it. 
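For context: errno 111 on Linux is ECONNREFUSED, meaning nothing was listening on 10.0.0.2:4420 when the initiator tried to connect, so each attempt above was rejected immediately and the qpair could not be established. A minimal sketch of the same failure outside the test, using only bash's /dev/tcp pseudo-device (the loopback address and port below are arbitrary stand-ins, not part of the test setup):

    # Connecting to a TCP port with no listener fails with ECONNREFUSED
    # (errno 111); the subshell keeps the failed exec redirection from
    # terminating the calling script.
    if ! (exec 3<>/dev/tcp/127.0.0.1/4420) 2>/dev/null; then
        echo "connect() refused: errno 111 (ECONNREFUSED)"
    fi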
00:30:06.462 [... the errno = 111 connect() failure triplets for tqpair=0x7f6570000b90 (addr=10.0.0.2, port=4420) continue from 07:28:28.519019 through 07:28:28.524518, interleaved with the test harness coming back from its wait loop: ...]
00:30:06.462 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:30:06.462 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0
00:30:06.462 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:06.462 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:30:06.462 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
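The (( i == 0 )) and return 0 xtrace lines above are the tail of a countdown wait loop in autotest_common.sh: the harness polls until the target is ready and uses the leftover counter to tell timeout apart from success. A hedged sketch of that pattern, assuming a hypothetical function name, retry count, and poll interval (SPDK's actual helper may differ):

    # Poll until a TCP listener answers; the final (( i == 0 )) check
    # is the timeout branch, mirroring the traced test above.
    wait_for_listener() {   # hypothetical name, not SPDK's API
        local host=$1 port=$2 i
        for ((i = 30; i > 0; i--)); do
            (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null && break
            sleep 1
        done
        (( i == 0 )) && return 1   # never came up: timed out
        return 0                   # listener reachable
    }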
00:30:06.463 [... the same triplet -- connect() failed, errno = 111; sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- keeps repeating, with only the timestamps changing, from 07:28:28.524714 through 07:28:28.559122 ...]
00:30:06.466 [2024-11-20 07:28:28.559357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.466 [2024-11-20 07:28:28.559376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.466 qpair failed and we were unable to recover it. 00:30:06.466 [2024-11-20 07:28:28.559659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.466 [2024-11-20 07:28:28.559676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.466 qpair failed and we were unable to recover it. 00:30:06.466 [2024-11-20 07:28:28.560025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.466 [2024-11-20 07:28:28.560044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.466 qpair failed and we were unable to recover it. 00:30:06.466 [2024-11-20 07:28:28.560381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.466 [2024-11-20 07:28:28.560401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.466 qpair failed and we were unable to recover it. 00:30:06.466 [2024-11-20 07:28:28.560736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.466 [2024-11-20 07:28:28.560754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.466 qpair failed and we were unable to recover it. 00:30:06.466 [2024-11-20 07:28:28.560971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.466 [2024-11-20 07:28:28.560989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.466 qpair failed and we were unable to recover it. 00:30:06.466 [2024-11-20 07:28:28.561282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.466 [2024-11-20 07:28:28.561299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.466 qpair failed and we were unable to recover it. 00:30:06.466 [2024-11-20 07:28:28.561614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.466 [2024-11-20 07:28:28.561637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.466 qpair failed and we were unable to recover it. 00:30:06.466 [2024-11-20 07:28:28.561832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.466 [2024-11-20 07:28:28.561850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.466 qpair failed and we were unable to recover it. 00:30:06.466 [2024-11-20 07:28:28.562186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.466 [2024-11-20 07:28:28.562207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.466 qpair failed and we were unable to recover it. 
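For reference, errno 111 on Linux is ECONNREFUSED: the host-side initiator is dialing 10.0.0.2:4420 before the target has a listener there, so the kernel refuses each connect() and the NVMe/TCP driver keeps retrying (the target-side TCP Transport Init notice only appears further down). A one-liner to confirm the mapping, illustrative only and not part of the test run:

    # Illustrative check, not part of the autotest: map errno 111 to its name.
    python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(111))'
    # prints: 111 Connection refused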
[... 00:30:06.466: retries continue from 07:28:28.562394 through 07:28:28.564233 ...]
00:30:06.467 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
[... 00:30:06.467: two more retries at 07:28:28.564523 and 07:28:28.564608 ...]
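The trap registered above is the harness's cleanup hook: on SIGINT, SIGTERM, or normal exit it first dumps the app's shared memory via process_shm and then tears the test down with nvmftestfini. The `|| :` (colon being the shell no-op) keeps a failed process_shm from short-circuiting the teardown under `set -e`. The bare idiom, as a sketch with hypothetical function names:

    # Sketch of the same cleanup idiom with hypothetical helpers:
    # best-effort diagnostics first, unconditional teardown second.
    dump_diagnostics() { echo "collecting diagnostics"; }
    teardown() { echo "tearing down"; }
    trap 'dump_diagnostics || :; teardown' SIGINT SIGTERM EXIT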
[... 00:30:06.467: retry at 07:28:28.564800 ...]
00:30:06.467 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
[... 00:30:06.467: retry at 07:28:28.565116 ...]
00:30:06.467 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
[... 00:30:06.467: retry at 07:28:28.565347 ...]
00:30:06.467 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 00:30:06.467: retries continue from 07:28:28.565661 through 07:28:28.567002 ...]
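rpc_cmd is the test harness's wrapper around SPDK's scripts/rpc.py, so the call above creates a 64 MiB RAM-backed malloc bdev with a 512-byte block size, named Malloc0, on the running target. Outside the harness the equivalent would be roughly the following, a sketch assuming the default local RPC socket:

    # Standalone equivalent (sketch): 64 MiB malloc bdev, 512-byte blocks.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0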
[... 00:30:06.467-00:30:06.470: retries continue from 07:28:28.567208 through 07:28:28.599084 ...]
[... 00:30:06.470: retries continue from 07:28:28.599296 through 07:28:28.601446 ...]
00:30:06.470 Malloc0
[... 00:30:06.470: retries continue from 07:28:28.601797 through 07:28:28.602511 ...]
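The bare `Malloc0` line interleaved with the retries is the stdout of the earlier rpc_cmd bdev_malloc_create call: the RPC returns the name of the bdev it created. The new bdev could then be inspected by name, again as a sketch against the same running target:

    # Sketch: query the freshly created bdev by name.
    ./scripts/rpc.py bdev_get_bdevs -b Malloc0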
00:30:06.470 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
[... 00:30:06.470: retry at 07:28:28.602863 ...]
00:30:06.470 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
[... 00:30:06.470: retry at 07:28:28.603077 ...]
00:30:06.470 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
[... 00:30:06.470: retries at 07:28:28.603387 and 07:28:28.603606 ...]
00:30:06.471 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 00:30:06.471: retries continue from 07:28:28.603954 through 07:28:28.605111 ...]
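Here the script brings up the target-side NVMe-oF TCP transport; the *** TCP Transport Init *** notice a little further down is the target acknowledging it. The standalone equivalent is roughly the following sketch, with the -o flag carried over from the test script unchanged:

    # Standalone equivalent (sketch): create the TCP transport on the target;
    # -o is passed through from the test script as-is.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o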
00:30:06.471 [2024-11-20 07:28:28.605448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.471 [2024-11-20 07:28:28.605466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.471 qpair failed and we were unable to recover it. 00:30:06.471 [2024-11-20 07:28:28.605807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.471 [2024-11-20 07:28:28.605826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.471 qpair failed and we were unable to recover it. 00:30:06.471 [2024-11-20 07:28:28.606057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.471 [2024-11-20 07:28:28.606075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.471 qpair failed and we were unable to recover it. 00:30:06.471 [2024-11-20 07:28:28.606315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.471 [2024-11-20 07:28:28.606332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.471 qpair failed and we were unable to recover it. 00:30:06.471 [2024-11-20 07:28:28.606545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.471 [2024-11-20 07:28:28.606562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.471 qpair failed and we were unable to recover it. 00:30:06.471 [2024-11-20 07:28:28.606904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.471 [2024-11-20 07:28:28.606922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.471 qpair failed and we were unable to recover it. 00:30:06.471 [2024-11-20 07:28:28.607280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.471 [2024-11-20 07:28:28.607301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.471 qpair failed and we were unable to recover it. 00:30:06.471 [2024-11-20 07:28:28.607505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.471 [2024-11-20 07:28:28.607523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.471 qpair failed and we were unable to recover it. 00:30:06.471 [2024-11-20 07:28:28.607864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.471 [2024-11-20 07:28:28.607881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.471 qpair failed and we were unable to recover it. 00:30:06.471 [2024-11-20 07:28:28.608220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.471 [2024-11-20 07:28:28.608238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.471 qpair failed and we were unable to recover it. 
00:30:06.471 [2024-11-20 07:28:28.608593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.471 [2024-11-20 07:28:28.608611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.471 qpair failed and we were unable to recover it. 00:30:06.471 [2024-11-20 07:28:28.608847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.471 [2024-11-20 07:28:28.608903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.471 [2024-11-20 07:28:28.608920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.471 qpair failed and we were unable to recover it. 00:30:06.471 [2024-11-20 07:28:28.609245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.471 [2024-11-20 07:28:28.609264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.471 qpair failed and we were unable to recover it. 00:30:06.471 [2024-11-20 07:28:28.609630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.471 [2024-11-20 07:28:28.609646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.471 qpair failed and we were unable to recover it. 00:30:06.471 [2024-11-20 07:28:28.609984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.471 [2024-11-20 07:28:28.610004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.471 qpair failed and we were unable to recover it. 00:30:06.471 [2024-11-20 07:28:28.610225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.471 [2024-11-20 07:28:28.610243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.471 qpair failed and we were unable to recover it. 00:30:06.471 [2024-11-20 07:28:28.610579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.471 [2024-11-20 07:28:28.610596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.471 qpair failed and we were unable to recover it. 00:30:06.471 [2024-11-20 07:28:28.610788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.471 [2024-11-20 07:28:28.610807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.471 qpair failed and we were unable to recover it. 00:30:06.471 [2024-11-20 07:28:28.611175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.471 [2024-11-20 07:28:28.611196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.471 qpair failed and we were unable to recover it. 
00:30:06.471 [2024-11-20 07:28:28.611326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.471 [2024-11-20 07:28:28.611345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.471 qpair failed and we were unable to recover it. 00:30:06.471 [2024-11-20 07:28:28.611637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.471 [2024-11-20 07:28:28.611655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.471 qpair failed and we were unable to recover it. 00:30:06.471 [2024-11-20 07:28:28.611996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.471 [2024-11-20 07:28:28.612015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.471 qpair failed and we were unable to recover it. 00:30:06.471 [2024-11-20 07:28:28.612231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.472 [2024-11-20 07:28:28.612250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.472 qpair failed and we were unable to recover it. 00:30:06.472 [2024-11-20 07:28:28.612598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.472 [2024-11-20 07:28:28.612616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.472 qpair failed and we were unable to recover it. 00:30:06.472 [2024-11-20 07:28:28.612956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.472 [2024-11-20 07:28:28.612974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.472 qpair failed and we were unable to recover it. 00:30:06.472 [2024-11-20 07:28:28.613325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.472 [2024-11-20 07:28:28.613345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.472 qpair failed and we were unable to recover it. 00:30:06.472 [2024-11-20 07:28:28.613699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.472 [2024-11-20 07:28:28.613718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.472 qpair failed and we were unable to recover it. 00:30:06.472 [2024-11-20 07:28:28.613947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.472 [2024-11-20 07:28:28.613963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.472 qpair failed and we were unable to recover it. 00:30:06.472 [2024-11-20 07:28:28.614214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.472 [2024-11-20 07:28:28.614232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.472 qpair failed and we were unable to recover it. 
00:30:06.472 [2024-11-20 07:28:28.614588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.472 [2024-11-20 07:28:28.614607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.472 qpair failed and we were unable to recover it. 00:30:06.472 [2024-11-20 07:28:28.614950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.472 [2024-11-20 07:28:28.614968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.472 qpair failed and we were unable to recover it. 00:30:06.472 [2024-11-20 07:28:28.615310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.472 [2024-11-20 07:28:28.615329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.472 qpair failed and we were unable to recover it. 00:30:06.472 [2024-11-20 07:28:28.615678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.472 [2024-11-20 07:28:28.615697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.472 qpair failed and we were unable to recover it. 00:30:06.472 [2024-11-20 07:28:28.615886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.472 [2024-11-20 07:28:28.615907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.472 qpair failed and we were unable to recover it. 00:30:06.472 [2024-11-20 07:28:28.615980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.472 [2024-11-20 07:28:28.615995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.472 qpair failed and we were unable to recover it. 00:30:06.472 [2024-11-20 07:28:28.616297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.472 [2024-11-20 07:28:28.616315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.472 qpair failed and we were unable to recover it. 00:30:06.472 [2024-11-20 07:28:28.616665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.472 [2024-11-20 07:28:28.616682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.472 qpair failed and we were unable to recover it. 00:30:06.472 [2024-11-20 07:28:28.616761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.472 [2024-11-20 07:28:28.616778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.472 qpair failed and we were unable to recover it. 00:30:06.472 [2024-11-20 07:28:28.617133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.472 [2024-11-20 07:28:28.617150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.472 qpair failed and we were unable to recover it. 
00:30:06.472 [2024-11-20 07:28:28.617477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.472 [2024-11-20 07:28:28.617495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.472 qpair failed and we were unable to recover it.
00:30:06.472 [2024-11-20 07:28:28.617754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.472 [2024-11-20 07:28:28.617772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.472 qpair failed and we were unable to recover it.
00:30:06.472 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:06.472 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:06.472 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:06.472 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:06.472 [2024-11-20 07:28:28.618144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.472 [2024-11-20 07:28:28.618172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.472 qpair failed and we were unable to recover it.
00:30:06.472 [2024-11-20 07:28:28.618484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.472 [2024-11-20 07:28:28.618502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.472 qpair failed and we were unable to recover it.
00:30:06.472 [2024-11-20 07:28:28.618844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.472 [2024-11-20 07:28:28.618864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.472 qpair failed and we were unable to recover it.
00:30:06.472 [2024-11-20 07:28:28.619199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.472 [2024-11-20 07:28:28.619217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.472 qpair failed and we were unable to recover it.
00:30:06.472 [2024-11-20 07:28:28.619620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.472 [2024-11-20 07:28:28.619638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.472 qpair failed and we were unable to recover it.
00:30:06.472 [2024-11-20 07:28:28.619980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.472 [2024-11-20 07:28:28.619998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.472 qpair failed and we were unable to recover it.
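The nvmf_create_subsystem call traced above creates the target-side subsystem the host will reconnect to. A minimal sketch of the same step, assuming a running nvmf_tgt:

    # create the subsystem; -a allows any host NQN to connect,
    # -s sets the subsystem serial number
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001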
00:30:06.472 [2024-11-20 07:28:28.620334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.472 [2024-11-20 07:28:28.620353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.472 qpair failed and we were unable to recover it. 00:30:06.472 [2024-11-20 07:28:28.620711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.472 [2024-11-20 07:28:28.620728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.472 qpair failed and we were unable to recover it. 00:30:06.472 [2024-11-20 07:28:28.620907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.472 [2024-11-20 07:28:28.620925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.472 qpair failed and we were unable to recover it. 00:30:06.472 [2024-11-20 07:28:28.621275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.472 [2024-11-20 07:28:28.621294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.472 qpair failed and we were unable to recover it. 00:30:06.472 [2024-11-20 07:28:28.621635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.472 [2024-11-20 07:28:28.621654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.472 qpair failed and we were unable to recover it. 00:30:06.472 [2024-11-20 07:28:28.621947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.621966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 00:30:06.473 [2024-11-20 07:28:28.622153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.622177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 00:30:06.473 [2024-11-20 07:28:28.622509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.622528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 00:30:06.473 [2024-11-20 07:28:28.622735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.622752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 00:30:06.473 [2024-11-20 07:28:28.623097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.623114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 
00:30:06.473 [2024-11-20 07:28:28.623423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.623444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 00:30:06.473 [2024-11-20 07:28:28.623788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.623809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 00:30:06.473 [2024-11-20 07:28:28.624172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.624191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 00:30:06.473 [2024-11-20 07:28:28.624553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.624571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 00:30:06.473 [2024-11-20 07:28:28.624909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.624927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 00:30:06.473 [2024-11-20 07:28:28.625225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.625243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 00:30:06.473 [2024-11-20 07:28:28.625614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.625632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 00:30:06.473 [2024-11-20 07:28:28.625845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.625863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 00:30:06.473 [2024-11-20 07:28:28.626062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.626081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 00:30:06.473 [2024-11-20 07:28:28.626383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.626401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 
00:30:06.473 [2024-11-20 07:28:28.626730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.626749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 00:30:06.473 [2024-11-20 07:28:28.627042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.627061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 00:30:06.473 [2024-11-20 07:28:28.627257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.627275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 00:30:06.473 [2024-11-20 07:28:28.627614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.627631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 00:30:06.473 [2024-11-20 07:28:28.627952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.627969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 00:30:06.473 [2024-11-20 07:28:28.628272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.628291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 00:30:06.473 [2024-11-20 07:28:28.628492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.628509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 00:30:06.473 [2024-11-20 07:28:28.628869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.628886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 00:30:06.473 [2024-11-20 07:28:28.629243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.629264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 00:30:06.473 [2024-11-20 07:28:28.629527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.473 [2024-11-20 07:28:28.629545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.473 qpair failed and we were unable to recover it. 
00:30:06.473 [2024-11-20 07:28:28.629785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.473 [2024-11-20 07:28:28.629802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.473 qpair failed and we were unable to recover it.
00:30:06.473 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:06.473 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:06.473 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:06.473 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:06.473 [2024-11-20 07:28:28.630006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.473 [2024-11-20 07:28:28.630026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.473 qpair failed and we were unable to recover it.
00:30:06.473 [2024-11-20 07:28:28.630246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.473 [2024-11-20 07:28:28.630266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.473 qpair failed and we were unable to recover it.
00:30:06.473 [2024-11-20 07:28:28.630485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.473 [2024-11-20 07:28:28.630504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.473 qpair failed and we were unable to recover it.
00:30:06.473 [2024-11-20 07:28:28.630841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.473 [2024-11-20 07:28:28.630861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.473 qpair failed and we were unable to recover it.
00:30:06.473 [2024-11-20 07:28:28.631168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.473 [2024-11-20 07:28:28.631187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.473 qpair failed and we were unable to recover it.
00:30:06.473 [2024-11-20 07:28:28.631533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.473 [2024-11-20 07:28:28.631552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.473 qpair failed and we were unable to recover it.
00:30:06.473 [2024-11-20 07:28:28.631745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.473 [2024-11-20 07:28:28.631762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.473 qpair failed and we were unable to recover it.
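The nvmf_subsystem_add_ns step traced above attaches a bdev to the subsystem as a namespace. Malloc0 is the RAM-backed malloc bdev whose name was echoed earlier in this log (its creation command is not shown in this excerpt); a hand-run sketch of the same step, assuming that bdev already exists:

    # expose the Malloc0 bdev as a namespace of cnode1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0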
00:30:06.473 [2024-11-20 07:28:28.632101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.632119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.632454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.632475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.632814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.632831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.633202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.633221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.633535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.633553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.633897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.633916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.634278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.634298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.634634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.634653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.634989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.635007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.635203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.635219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 
00:30:06.474 [2024-11-20 07:28:28.635297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.635312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.635519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.635543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.635890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.635907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.636232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.636253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.636482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.636499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.636853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.636873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.637208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.637226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.637447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.637464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.637808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.637825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.638162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.638182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 
00:30:06.474 [2024-11-20 07:28:28.638537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.638554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.638770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.638787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.639001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.639017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.639332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.639350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.639695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.639713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.639911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.639930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.640148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.640176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.640295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.640312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.640672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.474 [2024-11-20 07:28:28.640690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.474 qpair failed and we were unable to recover it. 00:30:06.474 [2024-11-20 07:28:28.641029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.475 [2024-11-20 07:28:28.641047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.475 qpair failed and we were unable to recover it. 
00:30:06.475 [2024-11-20 07:28:28.641272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.475 [2024-11-20 07:28:28.641291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.475 qpair failed and we were unable to recover it.
00:30:06.475 [2024-11-20 07:28:28.641604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.475 [2024-11-20 07:28:28.641624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.475 qpair failed and we were unable to recover it.
00:30:06.475 [2024-11-20 07:28:28.641960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.475 [2024-11-20 07:28:28.641979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.475 qpair failed and we were unable to recover it.
00:30:06.475 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:06.475 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:06.475 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:06.475 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:06.475 [2024-11-20 07:28:28.642180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.475 [2024-11-20 07:28:28.642200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.475 qpair failed and we were unable to recover it.
00:30:06.475 [2024-11-20 07:28:28.642462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.475 [2024-11-20 07:28:28.642481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.475 qpair failed and we were unable to recover it.
00:30:06.475 [2024-11-20 07:28:28.642684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.475 [2024-11-20 07:28:28.642705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.475 qpair failed and we were unable to recover it.
00:30:06.475 [2024-11-20 07:28:28.643043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.475 [2024-11-20 07:28:28.643064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.475 qpair failed and we were unable to recover it.
00:30:06.475 [2024-11-20 07:28:28.643377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.475 [2024-11-20 07:28:28.643397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420
00:30:06.475 qpair failed and we were unable to recover it.
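The nvmf_subsystem_add_listener step traced above is what finally opens the TCP listener the host has been failing to reach. A sketch of the equivalent call:

    # start accepting NVMe/TCP connections for cnode1 on 10.0.0.2:4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once this completes, the target logs the "NVMe/TCP Target Listening" notice seen a few lines below, and the connect() attempts stop being refused.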
00:30:06.475 [2024-11-20 07:28:28.643863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.475 [2024-11-20 07:28:28.643882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.475 qpair failed and we were unable to recover it. 00:30:06.475 [2024-11-20 07:28:28.644073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.475 [2024-11-20 07:28:28.644093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.475 qpair failed and we were unable to recover it. 00:30:06.475 [2024-11-20 07:28:28.644275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.475 [2024-11-20 07:28:28.644295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.475 qpair failed and we were unable to recover it. 00:30:06.475 [2024-11-20 07:28:28.644630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.475 [2024-11-20 07:28:28.644648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.475 qpair failed and we were unable to recover it. 00:30:06.475 [2024-11-20 07:28:28.644763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.475 [2024-11-20 07:28:28.644781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.475 qpair failed and we were unable to recover it. 00:30:06.475 [2024-11-20 07:28:28.645050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.475 [2024-11-20 07:28:28.645071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.475 qpair failed and we were unable to recover it. 00:30:06.475 [2024-11-20 07:28:28.645387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.475 [2024-11-20 07:28:28.645407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.475 qpair failed and we were unable to recover it. 00:30:06.475 [2024-11-20 07:28:28.645688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.475 [2024-11-20 07:28:28.645705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.475 qpair failed and we were unable to recover it. 00:30:06.475 [2024-11-20 07:28:28.646046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.475 [2024-11-20 07:28:28.646061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.475 qpair failed and we were unable to recover it. 00:30:06.475 [2024-11-20 07:28:28.646275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.475 [2024-11-20 07:28:28.646291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.475 qpair failed and we were unable to recover it. 
00:30:06.475 [2024-11-20 07:28:28.646602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.475 [2024-11-20 07:28:28.646617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.475 qpair failed and we were unable to recover it. 00:30:06.475 [2024-11-20 07:28:28.646912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.475 [2024-11-20 07:28:28.646931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.475 qpair failed and we were unable to recover it. 00:30:06.475 [2024-11-20 07:28:28.647316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.475 [2024-11-20 07:28:28.647333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.475 qpair failed and we were unable to recover it. 00:30:06.475 [2024-11-20 07:28:28.647532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.475 [2024-11-20 07:28:28.647548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.475 qpair failed and we were unable to recover it. 00:30:06.475 [2024-11-20 07:28:28.647887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.475 [2024-11-20 07:28:28.647903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.475 qpair failed and we were unable to recover it. 00:30:06.475 [2024-11-20 07:28:28.648117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.475 [2024-11-20 07:28:28.648132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.475 qpair failed and we were unable to recover it. 00:30:06.475 [2024-11-20 07:28:28.648415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.475 [2024-11-20 07:28:28.648431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.475 qpair failed and we were unable to recover it. 00:30:06.475 [2024-11-20 07:28:28.648779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.475 [2024-11-20 07:28:28.648795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.475 qpair failed and we were unable to recover it. 00:30:06.475 [2024-11-20 07:28:28.648863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.475 [2024-11-20 07:28:28.648879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6570000b90 with addr=10.0.0.2, port=4420 00:30:06.475 qpair failed and we were unable to recover it. 
00:30:06.475 [2024-11-20 07:28:28.649235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.475 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.475 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:06.475 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.475 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:06.475 [2024-11-20 07:28:28.660154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.475 [2024-11-20 07:28:28.660263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.475 [2024-11-20 07:28:28.660296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.475 [2024-11-20 07:28:28.660309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.475 [2024-11-20 07:28:28.660321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:06.476 [2024-11-20 07:28:28.660354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.476 qpair failed and we were unable to recover it. 00:30:06.476 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.476 07:28:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3709661 00:30:06.476 [2024-11-20 07:28:28.669882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.476 [2024-11-20 07:28:28.669967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.476 [2024-11-20 07:28:28.669993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.476 [2024-11-20 07:28:28.670007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.476 [2024-11-20 07:28:28.670019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:06.476 [2024-11-20 07:28:28.670049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.476 qpair failed and we were unable to recover it. 
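From this point the failure mode changes: TCP connects now succeed, but the Fabrics CONNECT for the I/O qpair is rejected by the target ("Unknown controller ID 0x1", since the controller it references is gone after the forced disconnect), completing with sct 1, sc 130 (0x82), which the NVMe-oF spec defines as Connect Invalid Parameters; the host driver then surfaces this as CQ transport error -6 (ENXIO, No such device or address). For illustration only (the test drives SPDK's own initiator, not the kernel one), a host-side connect attempt against this listener would look like:

    # kernel-initiator equivalent of the reconnect the test keeps retrying
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1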
[elided: 66 further qpair connect attempts between 07:28:28.679962 and 07:28:29.331919 (elapsed 00:30:06.476 to 00:30:07.270), issued roughly 10 ms apart, repeat the identical failure sequence shown above: ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair "Unknown controller ID 0x1"; nvme_fabric.c: 599/610 "Connect command failed, rc -5 ... sct 1, sc 130"; nvme_tcp.c:2348/2125 "Failed to poll NVMe-oF Fabric CONNECT command" / "Failed to connect tqpair=0x7f6570000b90"; nvme_qpair.c: 812 "CQ transport error -6 (No such device or address) on qpair id 1"; each attempt ends with "qpair failed and we were unable to recover it."]
00:30:07.270 [2024-11-20 07:28:29.341851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.270 [2024-11-20 07:28:29.341958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.270 [2024-11-20 07:28:29.341981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.270 [2024-11-20 07:28:29.341991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.270 [2024-11-20 07:28:29.341998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.270 [2024-11-20 07:28:29.342017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.270 qpair failed and we were unable to recover it. 00:30:07.270 [2024-11-20 07:28:29.351850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.270 [2024-11-20 07:28:29.351964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.270 [2024-11-20 07:28:29.351984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.270 [2024-11-20 07:28:29.351992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.270 [2024-11-20 07:28:29.351999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.270 [2024-11-20 07:28:29.352025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.271 qpair failed and we were unable to recover it. 00:30:07.271 [2024-11-20 07:28:29.361864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.271 [2024-11-20 07:28:29.361934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.271 [2024-11-20 07:28:29.361952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.271 [2024-11-20 07:28:29.361960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.271 [2024-11-20 07:28:29.361967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.271 [2024-11-20 07:28:29.361985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.271 qpair failed and we were unable to recover it. 
00:30:07.271 [2024-11-20 07:28:29.371899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.271 [2024-11-20 07:28:29.371983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.271 [2024-11-20 07:28:29.372001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.271 [2024-11-20 07:28:29.372009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.271 [2024-11-20 07:28:29.372017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.271 [2024-11-20 07:28:29.372035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.271 qpair failed and we were unable to recover it. 00:30:07.271 [2024-11-20 07:28:29.381961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.271 [2024-11-20 07:28:29.382037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.271 [2024-11-20 07:28:29.382054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.271 [2024-11-20 07:28:29.382061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.271 [2024-11-20 07:28:29.382069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.271 [2024-11-20 07:28:29.382087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.271 qpair failed and we were unable to recover it. 00:30:07.271 [2024-11-20 07:28:29.391930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.271 [2024-11-20 07:28:29.391988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.271 [2024-11-20 07:28:29.392005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.271 [2024-11-20 07:28:29.392013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.271 [2024-11-20 07:28:29.392020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.271 [2024-11-20 07:28:29.392038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.271 qpair failed and we were unable to recover it. 
00:30:07.271 [2024-11-20 07:28:29.402014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.271 [2024-11-20 07:28:29.402124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.271 [2024-11-20 07:28:29.402141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.271 [2024-11-20 07:28:29.402149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.271 [2024-11-20 07:28:29.402156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.271 [2024-11-20 07:28:29.402178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.271 qpair failed and we were unable to recover it. 00:30:07.271 [2024-11-20 07:28:29.412006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.271 [2024-11-20 07:28:29.412077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.271 [2024-11-20 07:28:29.412094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.271 [2024-11-20 07:28:29.412103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.271 [2024-11-20 07:28:29.412110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.271 [2024-11-20 07:28:29.412128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.271 qpair failed and we were unable to recover it. 00:30:07.271 [2024-11-20 07:28:29.422076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.271 [2024-11-20 07:28:29.422155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.271 [2024-11-20 07:28:29.422176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.271 [2024-11-20 07:28:29.422184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.271 [2024-11-20 07:28:29.422191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.271 [2024-11-20 07:28:29.422210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.271 qpair failed and we were unable to recover it. 
00:30:07.271 [2024-11-20 07:28:29.432084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.271 [2024-11-20 07:28:29.432143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.271 [2024-11-20 07:28:29.432164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.271 [2024-11-20 07:28:29.432173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.271 [2024-11-20 07:28:29.432180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.271 [2024-11-20 07:28:29.432198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.271 qpair failed and we were unable to recover it. 00:30:07.271 [2024-11-20 07:28:29.442089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.271 [2024-11-20 07:28:29.442155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.271 [2024-11-20 07:28:29.442176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.271 [2024-11-20 07:28:29.442190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.271 [2024-11-20 07:28:29.442197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.271 [2024-11-20 07:28:29.442216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.271 qpair failed and we were unable to recover it. 00:30:07.271 [2024-11-20 07:28:29.452088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.271 [2024-11-20 07:28:29.452154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.271 [2024-11-20 07:28:29.452176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.271 [2024-11-20 07:28:29.452183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.271 [2024-11-20 07:28:29.452190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.271 [2024-11-20 07:28:29.452209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.271 qpair failed and we were unable to recover it. 
00:30:07.271 [2024-11-20 07:28:29.462146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.271 [2024-11-20 07:28:29.462218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.271 [2024-11-20 07:28:29.462234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.271 [2024-11-20 07:28:29.462242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.271 [2024-11-20 07:28:29.462249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.271 [2024-11-20 07:28:29.462268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.271 qpair failed and we were unable to recover it. 00:30:07.271 [2024-11-20 07:28:29.472168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.271 [2024-11-20 07:28:29.472230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.271 [2024-11-20 07:28:29.472246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.271 [2024-11-20 07:28:29.472254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.271 [2024-11-20 07:28:29.472261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.271 [2024-11-20 07:28:29.472279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.271 qpair failed and we were unable to recover it. 00:30:07.271 [2024-11-20 07:28:29.482205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.271 [2024-11-20 07:28:29.482276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.271 [2024-11-20 07:28:29.482294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.271 [2024-11-20 07:28:29.482302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.271 [2024-11-20 07:28:29.482309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.271 [2024-11-20 07:28:29.482333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.272 qpair failed and we were unable to recover it. 
00:30:07.272 [2024-11-20 07:28:29.492245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.272 [2024-11-20 07:28:29.492318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.272 [2024-11-20 07:28:29.492334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.272 [2024-11-20 07:28:29.492342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.272 [2024-11-20 07:28:29.492349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.272 [2024-11-20 07:28:29.492367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.272 qpair failed and we were unable to recover it. 00:30:07.272 [2024-11-20 07:28:29.502330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.272 [2024-11-20 07:28:29.502403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.272 [2024-11-20 07:28:29.502420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.272 [2024-11-20 07:28:29.502427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.272 [2024-11-20 07:28:29.502434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.272 [2024-11-20 07:28:29.502452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.272 qpair failed and we were unable to recover it. 00:30:07.272 [2024-11-20 07:28:29.512283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.272 [2024-11-20 07:28:29.512358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.272 [2024-11-20 07:28:29.512375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.272 [2024-11-20 07:28:29.512383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.272 [2024-11-20 07:28:29.512389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.272 [2024-11-20 07:28:29.512409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.272 qpair failed and we were unable to recover it. 
00:30:07.272 [2024-11-20 07:28:29.522339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.272 [2024-11-20 07:28:29.522405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.272 [2024-11-20 07:28:29.522426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.272 [2024-11-20 07:28:29.522438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.272 [2024-11-20 07:28:29.522446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.272 [2024-11-20 07:28:29.522464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.272 qpair failed and we were unable to recover it. 00:30:07.272 [2024-11-20 07:28:29.532324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.272 [2024-11-20 07:28:29.532395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.272 [2024-11-20 07:28:29.532414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.272 [2024-11-20 07:28:29.532422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.272 [2024-11-20 07:28:29.532428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.272 [2024-11-20 07:28:29.532447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.272 qpair failed and we were unable to recover it. 00:30:07.272 [2024-11-20 07:28:29.542448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.272 [2024-11-20 07:28:29.542519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.272 [2024-11-20 07:28:29.542537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.272 [2024-11-20 07:28:29.542546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.272 [2024-11-20 07:28:29.542553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.272 [2024-11-20 07:28:29.542572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.272 qpair failed and we were unable to recover it. 
00:30:07.537 [2024-11-20 07:28:29.552420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.537 [2024-11-20 07:28:29.552486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.537 [2024-11-20 07:28:29.552503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.537 [2024-11-20 07:28:29.552511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.537 [2024-11-20 07:28:29.552518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.537 [2024-11-20 07:28:29.552536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.537 qpair failed and we were unable to recover it. 00:30:07.537 [2024-11-20 07:28:29.562363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.537 [2024-11-20 07:28:29.562428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.537 [2024-11-20 07:28:29.562445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.537 [2024-11-20 07:28:29.562453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.537 [2024-11-20 07:28:29.562461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.537 [2024-11-20 07:28:29.562479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.537 qpair failed and we were unable to recover it. 00:30:07.537 [2024-11-20 07:28:29.572470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.537 [2024-11-20 07:28:29.572540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.537 [2024-11-20 07:28:29.572566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.537 [2024-11-20 07:28:29.572574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.537 [2024-11-20 07:28:29.572581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.537 [2024-11-20 07:28:29.572599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.537 qpair failed and we were unable to recover it. 
00:30:07.537 [2024-11-20 07:28:29.582584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.537 [2024-11-20 07:28:29.582664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.537 [2024-11-20 07:28:29.582681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.537 [2024-11-20 07:28:29.582690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.537 [2024-11-20 07:28:29.582697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.537 [2024-11-20 07:28:29.582715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.537 qpair failed and we were unable to recover it. 00:30:07.537 [2024-11-20 07:28:29.592551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.537 [2024-11-20 07:28:29.592619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.537 [2024-11-20 07:28:29.592636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.537 [2024-11-20 07:28:29.592644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.537 [2024-11-20 07:28:29.592651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.537 [2024-11-20 07:28:29.592668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.537 qpair failed and we were unable to recover it. 00:30:07.537 [2024-11-20 07:28:29.602562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.537 [2024-11-20 07:28:29.602625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.537 [2024-11-20 07:28:29.602644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.537 [2024-11-20 07:28:29.602653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.537 [2024-11-20 07:28:29.602660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.537 [2024-11-20 07:28:29.602677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.537 qpair failed and we were unable to recover it. 
00:30:07.537 [2024-11-20 07:28:29.612527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.537 [2024-11-20 07:28:29.612595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.537 [2024-11-20 07:28:29.612612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.537 [2024-11-20 07:28:29.612621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.537 [2024-11-20 07:28:29.612633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.537 [2024-11-20 07:28:29.612651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.537 qpair failed and we were unable to recover it. 00:30:07.537 [2024-11-20 07:28:29.622555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.537 [2024-11-20 07:28:29.622627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.537 [2024-11-20 07:28:29.622645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.537 [2024-11-20 07:28:29.622653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.537 [2024-11-20 07:28:29.622660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.537 [2024-11-20 07:28:29.622678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.537 qpair failed and we were unable to recover it. 00:30:07.537 [2024-11-20 07:28:29.632644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.537 [2024-11-20 07:28:29.632709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.537 [2024-11-20 07:28:29.632726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.537 [2024-11-20 07:28:29.632734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.537 [2024-11-20 07:28:29.632741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.537 [2024-11-20 07:28:29.632759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.537 qpair failed and we were unable to recover it. 
00:30:07.537 [2024-11-20 07:28:29.642709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.537 [2024-11-20 07:28:29.642770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.537 [2024-11-20 07:28:29.642789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.538 [2024-11-20 07:28:29.642797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.538 [2024-11-20 07:28:29.642804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.538 [2024-11-20 07:28:29.642822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.538 qpair failed and we were unable to recover it. 00:30:07.538 [2024-11-20 07:28:29.652750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.538 [2024-11-20 07:28:29.652819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.538 [2024-11-20 07:28:29.652838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.538 [2024-11-20 07:28:29.652846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.538 [2024-11-20 07:28:29.652853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.538 [2024-11-20 07:28:29.652870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.538 qpair failed and we were unable to recover it. 00:30:07.538 [2024-11-20 07:28:29.662785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.538 [2024-11-20 07:28:29.662863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.538 [2024-11-20 07:28:29.662882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.538 [2024-11-20 07:28:29.662890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.538 [2024-11-20 07:28:29.662898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.538 [2024-11-20 07:28:29.662916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.538 qpair failed and we were unable to recover it. 
00:30:07.538 [2024-11-20 07:28:29.672859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.538 [2024-11-20 07:28:29.672957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.538 [2024-11-20 07:28:29.672973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.538 [2024-11-20 07:28:29.672981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.538 [2024-11-20 07:28:29.672990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.538 [2024-11-20 07:28:29.673008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.538 qpair failed and we were unable to recover it. 00:30:07.538 [2024-11-20 07:28:29.682836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.538 [2024-11-20 07:28:29.682895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.538 [2024-11-20 07:28:29.682912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.538 [2024-11-20 07:28:29.682920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.538 [2024-11-20 07:28:29.682927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.538 [2024-11-20 07:28:29.682945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.538 qpair failed and we were unable to recover it. 00:30:07.538 [2024-11-20 07:28:29.692857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.538 [2024-11-20 07:28:29.692925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.538 [2024-11-20 07:28:29.692942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.538 [2024-11-20 07:28:29.692951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.538 [2024-11-20 07:28:29.692958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.538 [2024-11-20 07:28:29.692976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.538 qpair failed and we were unable to recover it. 
00:30:07.538 [2024-11-20 07:28:29.702933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.538 [2024-11-20 07:28:29.703047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.538 [2024-11-20 07:28:29.703069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.538 [2024-11-20 07:28:29.703077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.538 [2024-11-20 07:28:29.703084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.538 [2024-11-20 07:28:29.703102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.538 qpair failed and we were unable to recover it. 00:30:07.538 [2024-11-20 07:28:29.712901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.538 [2024-11-20 07:28:29.712980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.538 [2024-11-20 07:28:29.712998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.538 [2024-11-20 07:28:29.713006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.538 [2024-11-20 07:28:29.713013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.538 [2024-11-20 07:28:29.713031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.538 qpair failed and we were unable to recover it. 00:30:07.538 [2024-11-20 07:28:29.722943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.538 [2024-11-20 07:28:29.723014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.538 [2024-11-20 07:28:29.723031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.538 [2024-11-20 07:28:29.723040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.538 [2024-11-20 07:28:29.723047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.538 [2024-11-20 07:28:29.723064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.538 qpair failed and we were unable to recover it. 
00:30:07.538 [2024-11-20 07:28:29.732977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.538 [2024-11-20 07:28:29.733048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.538 [2024-11-20 07:28:29.733065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.538 [2024-11-20 07:28:29.733074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.538 [2024-11-20 07:28:29.733080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.538 [2024-11-20 07:28:29.733098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.538 qpair failed and we were unable to recover it. 00:30:07.538 [2024-11-20 07:28:29.743032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.538 [2024-11-20 07:28:29.743116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.538 [2024-11-20 07:28:29.743133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.538 [2024-11-20 07:28:29.743141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.538 [2024-11-20 07:28:29.743166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.538 [2024-11-20 07:28:29.743186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.538 qpair failed and we were unable to recover it. 00:30:07.538 [2024-11-20 07:28:29.752974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.538 [2024-11-20 07:28:29.753048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.538 [2024-11-20 07:28:29.753066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.538 [2024-11-20 07:28:29.753074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.538 [2024-11-20 07:28:29.753081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.538 [2024-11-20 07:28:29.753099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.538 qpair failed and we were unable to recover it. 
00:30:07.538 [2024-11-20 07:28:29.763080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.538 [2024-11-20 07:28:29.763151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.538 [2024-11-20 07:28:29.763173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.538 [2024-11-20 07:28:29.763181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.538 [2024-11-20 07:28:29.763188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.538 [2024-11-20 07:28:29.763205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.538 qpair failed and we were unable to recover it. 00:30:07.538 [2024-11-20 07:28:29.773107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.538 [2024-11-20 07:28:29.773185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.538 [2024-11-20 07:28:29.773202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.539 [2024-11-20 07:28:29.773210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.539 [2024-11-20 07:28:29.773217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.539 [2024-11-20 07:28:29.773235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.539 qpair failed and we were unable to recover it. 00:30:07.539 [2024-11-20 07:28:29.783139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.539 [2024-11-20 07:28:29.783223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.539 [2024-11-20 07:28:29.783241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.539 [2024-11-20 07:28:29.783249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.539 [2024-11-20 07:28:29.783255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.539 [2024-11-20 07:28:29.783274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.539 qpair failed and we were unable to recover it. 
00:30:07.539 [2024-11-20 07:28:29.793132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.539 [2024-11-20 07:28:29.793199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.539 [2024-11-20 07:28:29.793218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.539 [2024-11-20 07:28:29.793226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.539 [2024-11-20 07:28:29.793233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.539 [2024-11-20 07:28:29.793251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.539 qpair failed and we were unable to recover it. 00:30:07.539 [2024-11-20 07:28:29.803145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.539 [2024-11-20 07:28:29.803247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.539 [2024-11-20 07:28:29.803265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.539 [2024-11-20 07:28:29.803275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.539 [2024-11-20 07:28:29.803282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.539 [2024-11-20 07:28:29.803299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.539 qpair failed and we were unable to recover it. 00:30:07.803 [2024-11-20 07:28:29.813206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.803 [2024-11-20 07:28:29.813277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.803 [2024-11-20 07:28:29.813295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.803 [2024-11-20 07:28:29.813304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.803 [2024-11-20 07:28:29.813313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.803 [2024-11-20 07:28:29.813331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.803 qpair failed and we were unable to recover it. 
00:30:07.803 [2024-11-20 07:28:29.823324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.803 [2024-11-20 07:28:29.823403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.803 [2024-11-20 07:28:29.823422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.803 [2024-11-20 07:28:29.823430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.803 [2024-11-20 07:28:29.823437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.803 [2024-11-20 07:28:29.823457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.803 qpair failed and we were unable to recover it. 00:30:07.803 [2024-11-20 07:28:29.833272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.803 [2024-11-20 07:28:29.833350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.803 [2024-11-20 07:28:29.833368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.803 [2024-11-20 07:28:29.833379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.803 [2024-11-20 07:28:29.833386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.803 [2024-11-20 07:28:29.833406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.803 qpair failed and we were unable to recover it. 00:30:07.803 [2024-11-20 07:28:29.843273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.803 [2024-11-20 07:28:29.843350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.803 [2024-11-20 07:28:29.843397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.803 [2024-11-20 07:28:29.843408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.803 [2024-11-20 07:28:29.843417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:07.803 [2024-11-20 07:28:29.843451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.803 qpair failed and we were unable to recover it. 
00:30:07.803 [2024-11-20 07:28:29.853319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.803 [2024-11-20 07:28:29.853387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.803 [2024-11-20 07:28:29.853407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.803 [2024-11-20 07:28:29.853415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.803 [2024-11-20 07:28:29.853422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:07.803 [2024-11-20 07:28:29.853442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.803 qpair failed and we were unable to recover it.
00:30:07.803 [2024-11-20 07:28:29.863406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.803 [2024-11-20 07:28:29.863497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.803 [2024-11-20 07:28:29.863514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.803 [2024-11-20 07:28:29.863522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.803 [2024-11-20 07:28:29.863531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:07.803 [2024-11-20 07:28:29.863549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.803 qpair failed and we were unable to recover it.
00:30:07.803 [2024-11-20 07:28:29.873398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.803 [2024-11-20 07:28:29.873480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.803 [2024-11-20 07:28:29.873498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.803 [2024-11-20 07:28:29.873513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.803 [2024-11-20 07:28:29.873519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:07.803 [2024-11-20 07:28:29.873538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.803 qpair failed and we were unable to recover it.
00:30:07.803 [2024-11-20 07:28:29.883445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.803 [2024-11-20 07:28:29.883516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.803 [2024-11-20 07:28:29.883534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.803 [2024-11-20 07:28:29.883542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.803 [2024-11-20 07:28:29.883549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:07.803 [2024-11-20 07:28:29.883567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.803 qpair failed and we were unable to recover it.
00:30:07.803 [2024-11-20 07:28:29.893467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.803 [2024-11-20 07:28:29.893533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.803 [2024-11-20 07:28:29.893550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.803 [2024-11-20 07:28:29.893558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.803 [2024-11-20 07:28:29.893566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:07.803 [2024-11-20 07:28:29.893584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.803 qpair failed and we were unable to recover it.
00:30:07.803 [2024-11-20 07:28:29.903491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.803 [2024-11-20 07:28:29.903576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.803 [2024-11-20 07:28:29.903594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.803 [2024-11-20 07:28:29.903601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.803 [2024-11-20 07:28:29.903608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:07.804 [2024-11-20 07:28:29.903627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.804 qpair failed and we were unable to recover it.
00:30:07.804 [2024-11-20 07:28:29.913544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.804 [2024-11-20 07:28:29.913606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.804 [2024-11-20 07:28:29.913624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.804 [2024-11-20 07:28:29.913632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.804 [2024-11-20 07:28:29.913638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:07.804 [2024-11-20 07:28:29.913661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.804 qpair failed and we were unable to recover it.
00:30:07.804 [2024-11-20 07:28:29.923432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.804 [2024-11-20 07:28:29.923493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.804 [2024-11-20 07:28:29.923510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.804 [2024-11-20 07:28:29.923518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.804 [2024-11-20 07:28:29.923524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:07.804 [2024-11-20 07:28:29.923542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.804 qpair failed and we were unable to recover it.
00:30:07.804 [2024-11-20 07:28:29.933574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.804 [2024-11-20 07:28:29.933645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.804 [2024-11-20 07:28:29.933662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.804 [2024-11-20 07:28:29.933670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.804 [2024-11-20 07:28:29.933677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:07.804 [2024-11-20 07:28:29.933695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.804 qpair failed and we were unable to recover it.
00:30:07.804 [2024-11-20 07:28:29.943634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.804 [2024-11-20 07:28:29.943762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.804 [2024-11-20 07:28:29.943781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.804 [2024-11-20 07:28:29.943790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.804 [2024-11-20 07:28:29.943797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:07.804 [2024-11-20 07:28:29.943817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.804 qpair failed and we were unable to recover it.
00:30:07.804 [2024-11-20 07:28:29.953626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.804 [2024-11-20 07:28:29.953694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.804 [2024-11-20 07:28:29.953712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.804 [2024-11-20 07:28:29.953720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.804 [2024-11-20 07:28:29.953727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:07.804 [2024-11-20 07:28:29.953746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.804 qpair failed and we were unable to recover it.
00:30:07.804 [2024-11-20 07:28:29.963641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.804 [2024-11-20 07:28:29.963739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.804 [2024-11-20 07:28:29.963757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.804 [2024-11-20 07:28:29.963766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.804 [2024-11-20 07:28:29.963773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:07.804 [2024-11-20 07:28:29.963792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.804 qpair failed and we were unable to recover it.
00:30:07.804 [2024-11-20 07:28:29.973696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.804 [2024-11-20 07:28:29.973770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.804 [2024-11-20 07:28:29.973787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.804 [2024-11-20 07:28:29.973794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.804 [2024-11-20 07:28:29.973802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:07.804 [2024-11-20 07:28:29.973820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.804 qpair failed and we were unable to recover it.
00:30:07.804 [2024-11-20 07:28:29.983631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.804 [2024-11-20 07:28:29.983697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.804 [2024-11-20 07:28:29.983714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.804 [2024-11-20 07:28:29.983722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.804 [2024-11-20 07:28:29.983729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:07.804 [2024-11-20 07:28:29.983747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.804 qpair failed and we were unable to recover it.
00:30:07.804 [2024-11-20 07:28:29.993733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.804 [2024-11-20 07:28:29.993803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.804 [2024-11-20 07:28:29.993820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.804 [2024-11-20 07:28:29.993828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.804 [2024-11-20 07:28:29.993836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:07.804 [2024-11-20 07:28:29.993854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.804 qpair failed and we were unable to recover it.
00:30:07.804 [2024-11-20 07:28:30.003827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.804 [2024-11-20 07:28:30.003947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.804 [2024-11-20 07:28:30.003973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.804 [2024-11-20 07:28:30.003989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.804 [2024-11-20 07:28:30.003996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:07.804 [2024-11-20 07:28:30.004017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.804 qpair failed and we were unable to recover it.
00:30:07.804 [2024-11-20 07:28:30.013808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.804 [2024-11-20 07:28:30.013878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.804 [2024-11-20 07:28:30.013896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.804 [2024-11-20 07:28:30.013905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.804 [2024-11-20 07:28:30.013912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:07.804 [2024-11-20 07:28:30.013931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.804 qpair failed and we were unable to recover it.
00:30:07.804 [2024-11-20 07:28:30.023844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.804 [2024-11-20 07:28:30.023915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.804 [2024-11-20 07:28:30.023934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.804 [2024-11-20 07:28:30.023942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.804 [2024-11-20 07:28:30.023950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:07.804 [2024-11-20 07:28:30.023969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.804 qpair failed and we were unable to recover it.
00:30:07.804 [2024-11-20 07:28:30.033868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.804 [2024-11-20 07:28:30.033930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.804 [2024-11-20 07:28:30.033947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.804 [2024-11-20 07:28:30.033958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.804 [2024-11-20 07:28:30.033966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:07.805 [2024-11-20 07:28:30.033985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.805 qpair failed and we were unable to recover it.
00:30:07.805 [2024-11-20 07:28:30.043873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.805 [2024-11-20 07:28:30.043930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.805 [2024-11-20 07:28:30.043948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.805 [2024-11-20 07:28:30.043956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.805 [2024-11-20 07:28:30.043964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:07.805 [2024-11-20 07:28:30.043988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.805 qpair failed and we were unable to recover it.
00:30:07.805 [2024-11-20 07:28:30.053955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.805 [2024-11-20 07:28:30.054027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.805 [2024-11-20 07:28:30.054046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.805 [2024-11-20 07:28:30.054056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.805 [2024-11-20 07:28:30.054063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:07.805 [2024-11-20 07:28:30.054083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.805 qpair failed and we were unable to recover it.
00:30:07.805 [2024-11-20 07:28:30.063995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.805 [2024-11-20 07:28:30.064063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.805 [2024-11-20 07:28:30.064082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.805 [2024-11-20 07:28:30.064091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.805 [2024-11-20 07:28:30.064097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:07.805 [2024-11-20 07:28:30.064116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.805 qpair failed and we were unable to recover it.
00:30:07.805 [2024-11-20 07:28:30.073988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.805 [2024-11-20 07:28:30.074050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.805 [2024-11-20 07:28:30.074068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.805 [2024-11-20 07:28:30.074077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.805 [2024-11-20 07:28:30.074084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:07.805 [2024-11-20 07:28:30.074103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.805 qpair failed and we were unable to recover it.
00:30:08.068 [2024-11-20 07:28:30.084017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.068 [2024-11-20 07:28:30.084084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.068 [2024-11-20 07:28:30.084103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.068 [2024-11-20 07:28:30.084112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.068 [2024-11-20 07:28:30.084119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.068 [2024-11-20 07:28:30.084137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.068 qpair failed and we were unable to recover it.
00:30:08.068 [2024-11-20 07:28:30.094033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.068 [2024-11-20 07:28:30.094100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.069 [2024-11-20 07:28:30.094117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.069 [2024-11-20 07:28:30.094125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.069 [2024-11-20 07:28:30.094133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.069 [2024-11-20 07:28:30.094153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.069 qpair failed and we were unable to recover it.
00:30:08.069 [2024-11-20 07:28:30.104114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.069 [2024-11-20 07:28:30.104196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.069 [2024-11-20 07:28:30.104213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.069 [2024-11-20 07:28:30.104221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.069 [2024-11-20 07:28:30.104228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.069 [2024-11-20 07:28:30.104248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.069 qpair failed and we were unable to recover it.
00:30:08.069 [2024-11-20 07:28:30.114122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.069 [2024-11-20 07:28:30.114217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.069 [2024-11-20 07:28:30.114235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.069 [2024-11-20 07:28:30.114244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.069 [2024-11-20 07:28:30.114251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.069 [2024-11-20 07:28:30.114269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.069 qpair failed and we were unable to recover it.
00:30:08.069 [2024-11-20 07:28:30.124186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.069 [2024-11-20 07:28:30.124256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.069 [2024-11-20 07:28:30.124274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.069 [2024-11-20 07:28:30.124281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.069 [2024-11-20 07:28:30.124288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.069 [2024-11-20 07:28:30.124308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.069 qpair failed and we were unable to recover it.
00:30:08.069 [2024-11-20 07:28:30.134294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.069 [2024-11-20 07:28:30.134375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.069 [2024-11-20 07:28:30.134398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.069 [2024-11-20 07:28:30.134407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.069 [2024-11-20 07:28:30.134414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.069 [2024-11-20 07:28:30.134433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.069 qpair failed and we were unable to recover it.
00:30:08.069 [2024-11-20 07:28:30.144259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.069 [2024-11-20 07:28:30.144374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.069 [2024-11-20 07:28:30.144394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.069 [2024-11-20 07:28:30.144402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.069 [2024-11-20 07:28:30.144410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.069 [2024-11-20 07:28:30.144429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.069 qpair failed and we were unable to recover it.
00:30:08.069 [2024-11-20 07:28:30.154263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.069 [2024-11-20 07:28:30.154334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.069 [2024-11-20 07:28:30.154353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.069 [2024-11-20 07:28:30.154361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.069 [2024-11-20 07:28:30.154368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.069 [2024-11-20 07:28:30.154387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.069 qpair failed and we were unable to recover it.
00:30:08.069 [2024-11-20 07:28:30.164266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.069 [2024-11-20 07:28:30.164330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.069 [2024-11-20 07:28:30.164347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.069 [2024-11-20 07:28:30.164355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.069 [2024-11-20 07:28:30.164363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.069 [2024-11-20 07:28:30.164381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.069 qpair failed and we were unable to recover it.
00:30:08.069 [2024-11-20 07:28:30.174374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.069 [2024-11-20 07:28:30.174443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.069 [2024-11-20 07:28:30.174460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.069 [2024-11-20 07:28:30.174469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.069 [2024-11-20 07:28:30.174481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.069 [2024-11-20 07:28:30.174499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.069 qpair failed and we were unable to recover it.
00:30:08.069 [2024-11-20 07:28:30.184383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.069 [2024-11-20 07:28:30.184464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.069 [2024-11-20 07:28:30.184482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.069 [2024-11-20 07:28:30.184490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.069 [2024-11-20 07:28:30.184497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.069 [2024-11-20 07:28:30.184517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.069 qpair failed and we were unable to recover it.
00:30:08.069 [2024-11-20 07:28:30.194374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.069 [2024-11-20 07:28:30.194434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.069 [2024-11-20 07:28:30.194451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.069 [2024-11-20 07:28:30.194459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.069 [2024-11-20 07:28:30.194467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.069 [2024-11-20 07:28:30.194485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.069 qpair failed and we were unable to recover it.
00:30:08.069 [2024-11-20 07:28:30.204415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.069 [2024-11-20 07:28:30.204474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.069 [2024-11-20 07:28:30.204494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.069 [2024-11-20 07:28:30.204502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.069 [2024-11-20 07:28:30.204511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.069 [2024-11-20 07:28:30.204530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.069 qpair failed and we were unable to recover it.
00:30:08.069 [2024-11-20 07:28:30.214450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.069 [2024-11-20 07:28:30.214525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.069 [2024-11-20 07:28:30.214542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.069 [2024-11-20 07:28:30.214551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.069 [2024-11-20 07:28:30.214558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.069 [2024-11-20 07:28:30.214576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.069 qpair failed and we were unable to recover it.
00:30:08.069 [2024-11-20 07:28:30.224504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.070 [2024-11-20 07:28:30.224578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.070 [2024-11-20 07:28:30.224595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.070 [2024-11-20 07:28:30.224603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.070 [2024-11-20 07:28:30.224610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.070 [2024-11-20 07:28:30.224629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.070 qpair failed and we were unable to recover it.
00:30:08.070 [2024-11-20 07:28:30.234500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.070 [2024-11-20 07:28:30.234566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.070 [2024-11-20 07:28:30.234584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.070 [2024-11-20 07:28:30.234593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.070 [2024-11-20 07:28:30.234600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.070 [2024-11-20 07:28:30.234618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.070 qpair failed and we were unable to recover it.
00:30:08.070 [2024-11-20 07:28:30.244392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.070 [2024-11-20 07:28:30.244453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.070 [2024-11-20 07:28:30.244473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.070 [2024-11-20 07:28:30.244483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.070 [2024-11-20 07:28:30.244491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.070 [2024-11-20 07:28:30.244511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.070 qpair failed and we were unable to recover it.
00:30:08.070 [2024-11-20 07:28:30.254551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.070 [2024-11-20 07:28:30.254615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.070 [2024-11-20 07:28:30.254633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.070 [2024-11-20 07:28:30.254641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.070 [2024-11-20 07:28:30.254649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.070 [2024-11-20 07:28:30.254667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.070 qpair failed and we were unable to recover it.
00:30:08.070 [2024-11-20 07:28:30.264573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.070 [2024-11-20 07:28:30.264644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.070 [2024-11-20 07:28:30.264666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.070 [2024-11-20 07:28:30.264675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.070 [2024-11-20 07:28:30.264682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.070 [2024-11-20 07:28:30.264700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.070 qpair failed and we were unable to recover it.
00:30:08.070 [2024-11-20 07:28:30.274556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.070 [2024-11-20 07:28:30.274617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.070 [2024-11-20 07:28:30.274634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.070 [2024-11-20 07:28:30.274643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.070 [2024-11-20 07:28:30.274650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.070 [2024-11-20 07:28:30.274668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.070 qpair failed and we were unable to recover it.
00:30:08.070 [2024-11-20 07:28:30.284571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.070 [2024-11-20 07:28:30.284629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.070 [2024-11-20 07:28:30.284645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.070 [2024-11-20 07:28:30.284653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.070 [2024-11-20 07:28:30.284661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.070 [2024-11-20 07:28:30.284678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.070 qpair failed and we were unable to recover it.
00:30:08.070 [2024-11-20 07:28:30.294637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.070 [2024-11-20 07:28:30.294704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.070 [2024-11-20 07:28:30.294720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.070 [2024-11-20 07:28:30.294728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.070 [2024-11-20 07:28:30.294736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.070 [2024-11-20 07:28:30.294753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.070 qpair failed and we were unable to recover it.
00:30:08.070 [2024-11-20 07:28:30.304656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.070 [2024-11-20 07:28:30.304724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.070 [2024-11-20 07:28:30.304739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.070 [2024-11-20 07:28:30.304747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.070 [2024-11-20 07:28:30.304758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.070 [2024-11-20 07:28:30.304775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.070 qpair failed and we were unable to recover it.
00:30:08.070 [2024-11-20 07:28:30.314643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.070 [2024-11-20 07:28:30.314749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.070 [2024-11-20 07:28:30.314764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.070 [2024-11-20 07:28:30.314772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.070 [2024-11-20 07:28:30.314779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.070 [2024-11-20 07:28:30.314796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.070 qpair failed and we were unable to recover it.
00:30:08.070 [2024-11-20 07:28:30.324672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.070 [2024-11-20 07:28:30.324734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.070 [2024-11-20 07:28:30.324750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.070 [2024-11-20 07:28:30.324757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.070 [2024-11-20 07:28:30.324764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.070 [2024-11-20 07:28:30.324781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.070 qpair failed and we were unable to recover it.
00:30:08.070 [2024-11-20 07:28:30.334656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.070 [2024-11-20 07:28:30.334729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.070 [2024-11-20 07:28:30.334744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.070 [2024-11-20 07:28:30.334752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.070 [2024-11-20 07:28:30.334759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.070 [2024-11-20 07:28:30.334775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.070 qpair failed and we were unable to recover it.
00:30:08.334 [2024-11-20 07:28:30.344786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.334 [2024-11-20 07:28:30.344847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.334 [2024-11-20 07:28:30.344863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.334 [2024-11-20 07:28:30.344871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.334 [2024-11-20 07:28:30.344878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.334 [2024-11-20 07:28:30.344894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.334 qpair failed and we were unable to recover it.
00:30:08.334 [2024-11-20 07:28:30.354732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.334 [2024-11-20 07:28:30.354789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.334 [2024-11-20 07:28:30.354803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.334 [2024-11-20 07:28:30.354811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.334 [2024-11-20 07:28:30.354818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.334 [2024-11-20 07:28:30.354834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.334 qpair failed and we were unable to recover it.
00:30:08.334 [2024-11-20 07:28:30.364796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.334 [2024-11-20 07:28:30.364844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.334 [2024-11-20 07:28:30.364859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.334 [2024-11-20 07:28:30.364867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.334 [2024-11-20 07:28:30.364873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.334 [2024-11-20 07:28:30.364889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.334 qpair failed and we were unable to recover it.
00:30:08.334 [2024-11-20 07:28:30.374798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.334 [2024-11-20 07:28:30.374859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.334 [2024-11-20 07:28:30.374873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.334 [2024-11-20 07:28:30.374881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.334 [2024-11-20 07:28:30.374889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.334 [2024-11-20 07:28:30.374907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.334 qpair failed and we were unable to recover it.
00:30:08.334 [2024-11-20 07:28:30.384937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.334 [2024-11-20 07:28:30.384999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.334 [2024-11-20 07:28:30.385013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.334 [2024-11-20 07:28:30.385021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.334 [2024-11-20 07:28:30.385027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.334 [2024-11-20 07:28:30.385043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.334 qpair failed and we were unable to recover it.
00:30:08.334 [2024-11-20 07:28:30.394863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.334 [2024-11-20 07:28:30.394925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.334 [2024-11-20 07:28:30.394952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.334 [2024-11-20 07:28:30.394962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.334 [2024-11-20 07:28:30.394970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.334 [2024-11-20 07:28:30.394992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.334 qpair failed and we were unable to recover it.
00:30:08.334 [2024-11-20 07:28:30.404885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.334 [2024-11-20 07:28:30.404942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.335 [2024-11-20 07:28:30.404969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.335 [2024-11-20 07:28:30.404978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.335 [2024-11-20 07:28:30.404986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.335 [2024-11-20 07:28:30.405007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.335 qpair failed and we were unable to recover it.
00:30:08.335 [2024-11-20 07:28:30.414959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.335 [2024-11-20 07:28:30.415019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.335 [2024-11-20 07:28:30.415035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.335 [2024-11-20 07:28:30.415043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.335 [2024-11-20 07:28:30.415050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:08.335 [2024-11-20 07:28:30.415067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.335 qpair failed and we were unable to recover it.
00:30:08.335 [2024-11-20 07:28:30.424994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.335 [2024-11-20 07:28:30.425046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.335 [2024-11-20 07:28:30.425060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.335 [2024-11-20 07:28:30.425068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.335 [2024-11-20 07:28:30.425074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.335 [2024-11-20 07:28:30.425091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.335 qpair failed and we were unable to recover it. 00:30:08.335 [2024-11-20 07:28:30.434977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.335 [2024-11-20 07:28:30.435025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.335 [2024-11-20 07:28:30.435039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.335 [2024-11-20 07:28:30.435051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.335 [2024-11-20 07:28:30.435058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.335 [2024-11-20 07:28:30.435074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.335 qpair failed and we were unable to recover it. 00:30:08.335 [2024-11-20 07:28:30.444995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.335 [2024-11-20 07:28:30.445049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.335 [2024-11-20 07:28:30.445063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.335 [2024-11-20 07:28:30.445071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.335 [2024-11-20 07:28:30.445078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.335 [2024-11-20 07:28:30.445093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.335 qpair failed and we were unable to recover it. 
00:30:08.335 [2024-11-20 07:28:30.455031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.335 [2024-11-20 07:28:30.455087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.335 [2024-11-20 07:28:30.455100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.335 [2024-11-20 07:28:30.455108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.335 [2024-11-20 07:28:30.455115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.335 [2024-11-20 07:28:30.455130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.335 qpair failed and we were unable to recover it. 00:30:08.335 [2024-11-20 07:28:30.465093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.335 [2024-11-20 07:28:30.465151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.335 [2024-11-20 07:28:30.465169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.335 [2024-11-20 07:28:30.465176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.335 [2024-11-20 07:28:30.465183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.335 [2024-11-20 07:28:30.465198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.335 qpair failed and we were unable to recover it. 00:30:08.335 [2024-11-20 07:28:30.475071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.335 [2024-11-20 07:28:30.475119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.335 [2024-11-20 07:28:30.475133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.335 [2024-11-20 07:28:30.475140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.335 [2024-11-20 07:28:30.475147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.335 [2024-11-20 07:28:30.475170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.335 qpair failed and we were unable to recover it. 
00:30:08.335 [2024-11-20 07:28:30.485082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.335 [2024-11-20 07:28:30.485135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.335 [2024-11-20 07:28:30.485148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.335 [2024-11-20 07:28:30.485155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.335 [2024-11-20 07:28:30.485166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.335 [2024-11-20 07:28:30.485181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.335 qpair failed and we were unable to recover it. 00:30:08.335 [2024-11-20 07:28:30.495186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.335 [2024-11-20 07:28:30.495243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.335 [2024-11-20 07:28:30.495256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.335 [2024-11-20 07:28:30.495263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.335 [2024-11-20 07:28:30.495270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.335 [2024-11-20 07:28:30.495285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.335 qpair failed and we were unable to recover it. 00:30:08.335 [2024-11-20 07:28:30.505203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.335 [2024-11-20 07:28:30.505256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.335 [2024-11-20 07:28:30.505270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.335 [2024-11-20 07:28:30.505278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.335 [2024-11-20 07:28:30.505284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.335 [2024-11-20 07:28:30.505299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.335 qpair failed and we were unable to recover it. 
00:30:08.335 [2024-11-20 07:28:30.515181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.335 [2024-11-20 07:28:30.515228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.335 [2024-11-20 07:28:30.515241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.335 [2024-11-20 07:28:30.515249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.335 [2024-11-20 07:28:30.515255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.335 [2024-11-20 07:28:30.515270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.335 qpair failed and we were unable to recover it. 00:30:08.335 [2024-11-20 07:28:30.525184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.335 [2024-11-20 07:28:30.525244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.335 [2024-11-20 07:28:30.525258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.335 [2024-11-20 07:28:30.525265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.335 [2024-11-20 07:28:30.525271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.335 [2024-11-20 07:28:30.525287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.335 qpair failed and we were unable to recover it. 00:30:08.335 [2024-11-20 07:28:30.535206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.335 [2024-11-20 07:28:30.535292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.336 [2024-11-20 07:28:30.535307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.336 [2024-11-20 07:28:30.535314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.336 [2024-11-20 07:28:30.535321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.336 [2024-11-20 07:28:30.535337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.336 qpair failed and we were unable to recover it. 
00:30:08.336 [2024-11-20 07:28:30.545192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.336 [2024-11-20 07:28:30.545251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.336 [2024-11-20 07:28:30.545266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.336 [2024-11-20 07:28:30.545273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.336 [2024-11-20 07:28:30.545280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.336 [2024-11-20 07:28:30.545295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.336 qpair failed and we were unable to recover it. 00:30:08.336 [2024-11-20 07:28:30.555286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.336 [2024-11-20 07:28:30.555351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.336 [2024-11-20 07:28:30.555364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.336 [2024-11-20 07:28:30.555371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.336 [2024-11-20 07:28:30.555378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.336 [2024-11-20 07:28:30.555393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.336 qpair failed and we were unable to recover it. 00:30:08.336 [2024-11-20 07:28:30.565304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.336 [2024-11-20 07:28:30.565354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.336 [2024-11-20 07:28:30.565371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.336 [2024-11-20 07:28:30.565378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.336 [2024-11-20 07:28:30.565385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.336 [2024-11-20 07:28:30.565400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.336 qpair failed and we were unable to recover it. 
00:30:08.336 [2024-11-20 07:28:30.575371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.336 [2024-11-20 07:28:30.575464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.336 [2024-11-20 07:28:30.575478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.336 [2024-11-20 07:28:30.575485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.336 [2024-11-20 07:28:30.575492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.336 [2024-11-20 07:28:30.575507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.336 qpair failed and we were unable to recover it. 00:30:08.336 [2024-11-20 07:28:30.585463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.336 [2024-11-20 07:28:30.585550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.336 [2024-11-20 07:28:30.585563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.336 [2024-11-20 07:28:30.585571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.336 [2024-11-20 07:28:30.585577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.336 [2024-11-20 07:28:30.585592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.336 qpair failed and we were unable to recover it. 00:30:08.336 [2024-11-20 07:28:30.595378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.336 [2024-11-20 07:28:30.595421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.336 [2024-11-20 07:28:30.595434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.336 [2024-11-20 07:28:30.595441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.336 [2024-11-20 07:28:30.595448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.336 [2024-11-20 07:28:30.595462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.336 qpair failed and we were unable to recover it. 
00:30:08.336 [2024-11-20 07:28:30.605427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.336 [2024-11-20 07:28:30.605516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.336 [2024-11-20 07:28:30.605529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.336 [2024-11-20 07:28:30.605537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.336 [2024-11-20 07:28:30.605544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.336 [2024-11-20 07:28:30.605565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.336 qpair failed and we were unable to recover it. 00:30:08.599 [2024-11-20 07:28:30.615511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.599 [2024-11-20 07:28:30.615579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.599 [2024-11-20 07:28:30.615592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.599 [2024-11-20 07:28:30.615599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.599 [2024-11-20 07:28:30.615606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.599 [2024-11-20 07:28:30.615621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.599 qpair failed and we were unable to recover it. 00:30:08.599 [2024-11-20 07:28:30.625538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.599 [2024-11-20 07:28:30.625633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.599 [2024-11-20 07:28:30.625647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.599 [2024-11-20 07:28:30.625654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.599 [2024-11-20 07:28:30.625661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.599 [2024-11-20 07:28:30.625676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.599 qpair failed and we were unable to recover it. 
00:30:08.599 [2024-11-20 07:28:30.635508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.599 [2024-11-20 07:28:30.635556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.599 [2024-11-20 07:28:30.635569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.599 [2024-11-20 07:28:30.635576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.599 [2024-11-20 07:28:30.635583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.599 [2024-11-20 07:28:30.635597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.599 qpair failed and we were unable to recover it. 00:30:08.599 [2024-11-20 07:28:30.645518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.599 [2024-11-20 07:28:30.645572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.599 [2024-11-20 07:28:30.645586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.599 [2024-11-20 07:28:30.645593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.599 [2024-11-20 07:28:30.645600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.599 [2024-11-20 07:28:30.645615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.599 qpair failed and we were unable to recover it. 00:30:08.599 [2024-11-20 07:28:30.655601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.599 [2024-11-20 07:28:30.655700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.599 [2024-11-20 07:28:30.655714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.599 [2024-11-20 07:28:30.655721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.599 [2024-11-20 07:28:30.655728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.599 [2024-11-20 07:28:30.655743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.599 qpair failed and we were unable to recover it. 
00:30:08.599 [2024-11-20 07:28:30.665617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.599 [2024-11-20 07:28:30.665680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.599 [2024-11-20 07:28:30.665693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.599 [2024-11-20 07:28:30.665701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.599 [2024-11-20 07:28:30.665708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.599 [2024-11-20 07:28:30.665723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.599 qpair failed and we were unable to recover it. 00:30:08.599 [2024-11-20 07:28:30.675606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.599 [2024-11-20 07:28:30.675667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.599 [2024-11-20 07:28:30.675680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.599 [2024-11-20 07:28:30.675687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.599 [2024-11-20 07:28:30.675694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.599 [2024-11-20 07:28:30.675709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.600 qpair failed and we were unable to recover it. 00:30:08.600 [2024-11-20 07:28:30.685592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.600 [2024-11-20 07:28:30.685643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.600 [2024-11-20 07:28:30.685656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.600 [2024-11-20 07:28:30.685663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.600 [2024-11-20 07:28:30.685670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.600 [2024-11-20 07:28:30.685684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.600 qpair failed and we were unable to recover it. 
00:30:08.600 [2024-11-20 07:28:30.695688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.600 [2024-11-20 07:28:30.695741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.600 [2024-11-20 07:28:30.695757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.600 [2024-11-20 07:28:30.695764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.600 [2024-11-20 07:28:30.695770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.600 [2024-11-20 07:28:30.695785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.600 qpair failed and we were unable to recover it. 00:30:08.600 [2024-11-20 07:28:30.705748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.600 [2024-11-20 07:28:30.705801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.600 [2024-11-20 07:28:30.705815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.600 [2024-11-20 07:28:30.705822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.600 [2024-11-20 07:28:30.705828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.600 [2024-11-20 07:28:30.705843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.600 qpair failed and we were unable to recover it. 00:30:08.600 [2024-11-20 07:28:30.715716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.600 [2024-11-20 07:28:30.715768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.600 [2024-11-20 07:28:30.715781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.600 [2024-11-20 07:28:30.715789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.600 [2024-11-20 07:28:30.715795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.600 [2024-11-20 07:28:30.715809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.600 qpair failed and we were unable to recover it. 
00:30:08.600 [2024-11-20 07:28:30.725732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.600 [2024-11-20 07:28:30.725782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.600 [2024-11-20 07:28:30.725795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.600 [2024-11-20 07:28:30.725802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.600 [2024-11-20 07:28:30.725809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.600 [2024-11-20 07:28:30.725823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.600 qpair failed and we were unable to recover it. 00:30:08.600 [2024-11-20 07:28:30.735817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.600 [2024-11-20 07:28:30.735870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.600 [2024-11-20 07:28:30.735883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.600 [2024-11-20 07:28:30.735890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.600 [2024-11-20 07:28:30.735900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.600 [2024-11-20 07:28:30.735914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.600 qpair failed and we were unable to recover it. 00:30:08.600 [2024-11-20 07:28:30.745841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.600 [2024-11-20 07:28:30.745900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.600 [2024-11-20 07:28:30.745914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.600 [2024-11-20 07:28:30.745922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.600 [2024-11-20 07:28:30.745928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.600 [2024-11-20 07:28:30.745946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.600 qpair failed and we were unable to recover it. 
00:30:08.600 [2024-11-20 07:28:30.755832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.600 [2024-11-20 07:28:30.755881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.600 [2024-11-20 07:28:30.755896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.600 [2024-11-20 07:28:30.755903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.600 [2024-11-20 07:28:30.755910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.600 [2024-11-20 07:28:30.755925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.600 qpair failed and we were unable to recover it. 00:30:08.600 [2024-11-20 07:28:30.765744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.600 [2024-11-20 07:28:30.765835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.600 [2024-11-20 07:28:30.765849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.600 [2024-11-20 07:28:30.765857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.600 [2024-11-20 07:28:30.765863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.600 [2024-11-20 07:28:30.765878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.600 qpair failed and we were unable to recover it. 00:30:08.600 [2024-11-20 07:28:30.775795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.600 [2024-11-20 07:28:30.775849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.600 [2024-11-20 07:28:30.775863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.600 [2024-11-20 07:28:30.775870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.600 [2024-11-20 07:28:30.775876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.600 [2024-11-20 07:28:30.775897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.600 qpair failed and we were unable to recover it. 
00:30:08.600 [2024-11-20 07:28:30.785973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.600 [2024-11-20 07:28:30.786030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.600 [2024-11-20 07:28:30.786044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.600 [2024-11-20 07:28:30.786051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.600 [2024-11-20 07:28:30.786058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.600 [2024-11-20 07:28:30.786073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.600 qpair failed and we were unable to recover it. 00:30:08.600 [2024-11-20 07:28:30.795928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.600 [2024-11-20 07:28:30.795975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.600 [2024-11-20 07:28:30.795989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.600 [2024-11-20 07:28:30.795996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.601 [2024-11-20 07:28:30.796002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.601 [2024-11-20 07:28:30.796017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.601 qpair failed and we were unable to recover it. 00:30:08.601 [2024-11-20 07:28:30.805834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.601 [2024-11-20 07:28:30.805879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.601 [2024-11-20 07:28:30.805892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.601 [2024-11-20 07:28:30.805899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.601 [2024-11-20 07:28:30.805906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.601 [2024-11-20 07:28:30.805920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.601 qpair failed and we were unable to recover it. 
00:30:08.601 [2024-11-20 07:28:30.815908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.601 [2024-11-20 07:28:30.815963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.601 [2024-11-20 07:28:30.815978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.601 [2024-11-20 07:28:30.815990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.601 [2024-11-20 07:28:30.815997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.601 [2024-11-20 07:28:30.816013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.601 qpair failed and we were unable to recover it. 00:30:08.601 [2024-11-20 07:28:30.826061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.601 [2024-11-20 07:28:30.826131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.601 [2024-11-20 07:28:30.826148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.601 [2024-11-20 07:28:30.826155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.601 [2024-11-20 07:28:30.826166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.601 [2024-11-20 07:28:30.826181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.601 qpair failed and we were unable to recover it. 00:30:08.601 [2024-11-20 07:28:30.836030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.601 [2024-11-20 07:28:30.836082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.601 [2024-11-20 07:28:30.836095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.601 [2024-11-20 07:28:30.836102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.601 [2024-11-20 07:28:30.836108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.601 [2024-11-20 07:28:30.836123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.601 qpair failed and we were unable to recover it. 
00:30:08.601 [2024-11-20 07:28:30.846066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.601 [2024-11-20 07:28:30.846111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.601 [2024-11-20 07:28:30.846125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.601 [2024-11-20 07:28:30.846132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.601 [2024-11-20 07:28:30.846139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.601 [2024-11-20 07:28:30.846153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.601 qpair failed and we were unable to recover it. 00:30:08.601 [2024-11-20 07:28:30.856148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.601 [2024-11-20 07:28:30.856210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.601 [2024-11-20 07:28:30.856224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.601 [2024-11-20 07:28:30.856231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.601 [2024-11-20 07:28:30.856238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.601 [2024-11-20 07:28:30.856252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.601 qpair failed and we were unable to recover it. 00:30:08.601 [2024-11-20 07:28:30.866134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.601 [2024-11-20 07:28:30.866200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.601 [2024-11-20 07:28:30.866213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.601 [2024-11-20 07:28:30.866224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.601 [2024-11-20 07:28:30.866230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.601 [2024-11-20 07:28:30.866245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.601 qpair failed and we were unable to recover it. 
00:30:08.863 [2024-11-20 07:28:30.876153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.863 [2024-11-20 07:28:30.876207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.863 [2024-11-20 07:28:30.876220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.864 [2024-11-20 07:28:30.876228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.864 [2024-11-20 07:28:30.876234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.864 [2024-11-20 07:28:30.876249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.864 qpair failed and we were unable to recover it. 00:30:08.864 [2024-11-20 07:28:30.886053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.864 [2024-11-20 07:28:30.886104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.864 [2024-11-20 07:28:30.886117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.864 [2024-11-20 07:28:30.886125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.864 [2024-11-20 07:28:30.886131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.864 [2024-11-20 07:28:30.886146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.864 qpair failed and we were unable to recover it. 00:30:08.864 [2024-11-20 07:28:30.896249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.864 [2024-11-20 07:28:30.896302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.864 [2024-11-20 07:28:30.896315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.864 [2024-11-20 07:28:30.896323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.864 [2024-11-20 07:28:30.896330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.864 [2024-11-20 07:28:30.896345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.864 qpair failed and we were unable to recover it. 
00:30:08.864 [2024-11-20 07:28:30.906260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.864 [2024-11-20 07:28:30.906334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.864 [2024-11-20 07:28:30.906348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.864 [2024-11-20 07:28:30.906355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.864 [2024-11-20 07:28:30.906361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.864 [2024-11-20 07:28:30.906377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.864 qpair failed and we were unable to recover it. 00:30:08.864 [2024-11-20 07:28:30.916184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.864 [2024-11-20 07:28:30.916232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.864 [2024-11-20 07:28:30.916245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.864 [2024-11-20 07:28:30.916253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.864 [2024-11-20 07:28:30.916259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.864 [2024-11-20 07:28:30.916274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.864 qpair failed and we were unable to recover it. 00:30:08.864 [2024-11-20 07:28:30.926287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.864 [2024-11-20 07:28:30.926336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.864 [2024-11-20 07:28:30.926349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.864 [2024-11-20 07:28:30.926357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.864 [2024-11-20 07:28:30.926363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.864 [2024-11-20 07:28:30.926377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.864 qpair failed and we were unable to recover it. 
00:30:08.864 [2024-11-20 07:28:30.936384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.864 [2024-11-20 07:28:30.936437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.864 [2024-11-20 07:28:30.936450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.864 [2024-11-20 07:28:30.936457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.864 [2024-11-20 07:28:30.936464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.864 [2024-11-20 07:28:30.936479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.864 qpair failed and we were unable to recover it. 00:30:08.864 [2024-11-20 07:28:30.946407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.864 [2024-11-20 07:28:30.946461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.864 [2024-11-20 07:28:30.946474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.864 [2024-11-20 07:28:30.946481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.864 [2024-11-20 07:28:30.946488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.864 [2024-11-20 07:28:30.946502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.864 qpair failed and we were unable to recover it. 00:30:08.864 [2024-11-20 07:28:30.956380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.864 [2024-11-20 07:28:30.956431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.864 [2024-11-20 07:28:30.956444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.864 [2024-11-20 07:28:30.956452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.864 [2024-11-20 07:28:30.956458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:08.864 [2024-11-20 07:28:30.956473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.864 qpair failed and we were unable to recover it. 
[... the same seven-line error block repeats for 63 further I/O qpair connect attempts, [2024-11-20 07:28:30.966] through [2024-11-20 07:28:31.588], differing only in timestamps ...]
00:30:09.395 [2024-11-20 07:28:31.598088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.395 [2024-11-20 07:28:31.598137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.395 [2024-11-20 07:28:31.598151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.395 [2024-11-20 07:28:31.598163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.395 [2024-11-20 07:28:31.598170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.395 [2024-11-20 07:28:31.598189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.395 qpair failed and we were unable to recover it. 00:30:09.395 [2024-11-20 07:28:31.608092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.395 [2024-11-20 07:28:31.608181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.395 [2024-11-20 07:28:31.608195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.395 [2024-11-20 07:28:31.608203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.395 [2024-11-20 07:28:31.608210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.395 [2024-11-20 07:28:31.608224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.395 qpair failed and we were unable to recover it. 00:30:09.395 [2024-11-20 07:28:31.618200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.395 [2024-11-20 07:28:31.618263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.395 [2024-11-20 07:28:31.618278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.395 [2024-11-20 07:28:31.618286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.395 [2024-11-20 07:28:31.618295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.395 [2024-11-20 07:28:31.618311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.395 qpair failed and we were unable to recover it. 
00:30:09.395 [2024-11-20 07:28:31.628197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.395 [2024-11-20 07:28:31.628244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.395 [2024-11-20 07:28:31.628258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.395 [2024-11-20 07:28:31.628265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.395 [2024-11-20 07:28:31.628272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.395 [2024-11-20 07:28:31.628287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.395 qpair failed and we were unable to recover it. 00:30:09.395 [2024-11-20 07:28:31.638264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.395 [2024-11-20 07:28:31.638324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.395 [2024-11-20 07:28:31.638338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.395 [2024-11-20 07:28:31.638345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.395 [2024-11-20 07:28:31.638352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.395 [2024-11-20 07:28:31.638366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.395 qpair failed and we were unable to recover it. 00:30:09.395 [2024-11-20 07:28:31.648147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.395 [2024-11-20 07:28:31.648201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.395 [2024-11-20 07:28:31.648218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.395 [2024-11-20 07:28:31.648225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.395 [2024-11-20 07:28:31.648232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.395 [2024-11-20 07:28:31.648248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.395 qpair failed and we were unable to recover it. 
00:30:09.395 [2024-11-20 07:28:31.658269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.395 [2024-11-20 07:28:31.658325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.396 [2024-11-20 07:28:31.658339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.396 [2024-11-20 07:28:31.658346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.396 [2024-11-20 07:28:31.658353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.396 [2024-11-20 07:28:31.658368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.663 [2024-11-20 07:28:31.668301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.663 [2024-11-20 07:28:31.668352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.663 [2024-11-20 07:28:31.668366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.663 [2024-11-20 07:28:31.668373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.663 [2024-11-20 07:28:31.668380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.663 [2024-11-20 07:28:31.668395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-11-20 07:28:31.678362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.663 [2024-11-20 07:28:31.678426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.663 [2024-11-20 07:28:31.678439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.663 [2024-11-20 07:28:31.678446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.663 [2024-11-20 07:28:31.678453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.663 [2024-11-20 07:28:31.678468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.663 qpair failed and we were unable to recover it. 
00:30:09.663 [2024-11-20 07:28:31.688349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.663 [2024-11-20 07:28:31.688398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.663 [2024-11-20 07:28:31.688415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.663 [2024-11-20 07:28:31.688423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.663 [2024-11-20 07:28:31.688429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.663 [2024-11-20 07:28:31.688444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-11-20 07:28:31.698419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.663 [2024-11-20 07:28:31.698472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.663 [2024-11-20 07:28:31.698485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.663 [2024-11-20 07:28:31.698493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.663 [2024-11-20 07:28:31.698499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.663 [2024-11-20 07:28:31.698514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-11-20 07:28:31.708446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.663 [2024-11-20 07:28:31.708544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.663 [2024-11-20 07:28:31.708558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.663 [2024-11-20 07:28:31.708566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.663 [2024-11-20 07:28:31.708572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.663 [2024-11-20 07:28:31.708588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.663 qpair failed and we were unable to recover it. 
00:30:09.663 [2024-11-20 07:28:31.718435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.663 [2024-11-20 07:28:31.718485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.663 [2024-11-20 07:28:31.718499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.663 [2024-11-20 07:28:31.718506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.663 [2024-11-20 07:28:31.718513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.663 [2024-11-20 07:28:31.718527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-11-20 07:28:31.728471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.663 [2024-11-20 07:28:31.728516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.663 [2024-11-20 07:28:31.728529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.663 [2024-11-20 07:28:31.728536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.663 [2024-11-20 07:28:31.728546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.663 [2024-11-20 07:28:31.728561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-11-20 07:28:31.738513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.663 [2024-11-20 07:28:31.738581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.663 [2024-11-20 07:28:31.738594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.663 [2024-11-20 07:28:31.738601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.663 [2024-11-20 07:28:31.738608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.663 [2024-11-20 07:28:31.738623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.663 qpair failed and we were unable to recover it. 
00:30:09.663 [2024-11-20 07:28:31.748552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.663 [2024-11-20 07:28:31.748606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.663 [2024-11-20 07:28:31.748620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.663 [2024-11-20 07:28:31.748627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.663 [2024-11-20 07:28:31.748633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.663 [2024-11-20 07:28:31.748648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-11-20 07:28:31.758508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.663 [2024-11-20 07:28:31.758593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.663 [2024-11-20 07:28:31.758606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.663 [2024-11-20 07:28:31.758614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.663 [2024-11-20 07:28:31.758621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.663 [2024-11-20 07:28:31.758636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.663 qpair failed and we were unable to recover it. 00:30:09.663 [2024-11-20 07:28:31.768553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.663 [2024-11-20 07:28:31.768599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.663 [2024-11-20 07:28:31.768613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.664 [2024-11-20 07:28:31.768620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.664 [2024-11-20 07:28:31.768627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.664 [2024-11-20 07:28:31.768641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.664 qpair failed and we were unable to recover it. 
00:30:09.664 [2024-11-20 07:28:31.778639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.664 [2024-11-20 07:28:31.778691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.664 [2024-11-20 07:28:31.778704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.664 [2024-11-20 07:28:31.778711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.664 [2024-11-20 07:28:31.778718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.664 [2024-11-20 07:28:31.778733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-11-20 07:28:31.788626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.664 [2024-11-20 07:28:31.788678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.664 [2024-11-20 07:28:31.788692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.664 [2024-11-20 07:28:31.788699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.664 [2024-11-20 07:28:31.788706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.664 [2024-11-20 07:28:31.788721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-11-20 07:28:31.798649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.664 [2024-11-20 07:28:31.798701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.664 [2024-11-20 07:28:31.798714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.664 [2024-11-20 07:28:31.798721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.664 [2024-11-20 07:28:31.798728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.664 [2024-11-20 07:28:31.798742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.664 qpair failed and we were unable to recover it. 
00:30:09.664 [2024-11-20 07:28:31.808669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.664 [2024-11-20 07:28:31.808713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.664 [2024-11-20 07:28:31.808726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.664 [2024-11-20 07:28:31.808734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.664 [2024-11-20 07:28:31.808740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.664 [2024-11-20 07:28:31.808755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-11-20 07:28:31.818744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.664 [2024-11-20 07:28:31.818801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.664 [2024-11-20 07:28:31.818817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.664 [2024-11-20 07:28:31.818825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.664 [2024-11-20 07:28:31.818831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.664 [2024-11-20 07:28:31.818846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-11-20 07:28:31.828738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.664 [2024-11-20 07:28:31.828788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.664 [2024-11-20 07:28:31.828803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.664 [2024-11-20 07:28:31.828810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.664 [2024-11-20 07:28:31.828817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.664 [2024-11-20 07:28:31.828832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.664 qpair failed and we were unable to recover it. 
00:30:09.664 [2024-11-20 07:28:31.838730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.664 [2024-11-20 07:28:31.838798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.664 [2024-11-20 07:28:31.838811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.664 [2024-11-20 07:28:31.838818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.664 [2024-11-20 07:28:31.838825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.664 [2024-11-20 07:28:31.838840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-11-20 07:28:31.848781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.664 [2024-11-20 07:28:31.848836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.664 [2024-11-20 07:28:31.848860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.664 [2024-11-20 07:28:31.848871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.664 [2024-11-20 07:28:31.848878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.664 [2024-11-20 07:28:31.848899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-11-20 07:28:31.858856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.664 [2024-11-20 07:28:31.858915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.664 [2024-11-20 07:28:31.858940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.664 [2024-11-20 07:28:31.858949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.664 [2024-11-20 07:28:31.858961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.664 [2024-11-20 07:28:31.858982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.664 qpair failed and we were unable to recover it. 
00:30:09.664 [2024-11-20 07:28:31.868857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.664 [2024-11-20 07:28:31.868913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.664 [2024-11-20 07:28:31.868928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.664 [2024-11-20 07:28:31.868936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.664 [2024-11-20 07:28:31.868943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.664 [2024-11-20 07:28:31.868959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-11-20 07:28:31.878852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.664 [2024-11-20 07:28:31.878920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.664 [2024-11-20 07:28:31.878933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.664 [2024-11-20 07:28:31.878941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.664 [2024-11-20 07:28:31.878947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.664 [2024-11-20 07:28:31.878963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.664 qpair failed and we were unable to recover it. 00:30:09.664 [2024-11-20 07:28:31.888902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.664 [2024-11-20 07:28:31.889000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.664 [2024-11-20 07:28:31.889025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.664 [2024-11-20 07:28:31.889034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.664 [2024-11-20 07:28:31.889041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.664 [2024-11-20 07:28:31.889062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.664 qpair failed and we were unable to recover it. 
00:30:09.664 [2024-11-20 07:28:31.898956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.664 [2024-11-20 07:28:31.899016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.664 [2024-11-20 07:28:31.899040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.665 [2024-11-20 07:28:31.899049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.665 [2024-11-20 07:28:31.899058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.665 [2024-11-20 07:28:31.899081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-11-20 07:28:31.908960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.665 [2024-11-20 07:28:31.909014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.665 [2024-11-20 07:28:31.909029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.665 [2024-11-20 07:28:31.909036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.665 [2024-11-20 07:28:31.909043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.665 [2024-11-20 07:28:31.909058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.665 [2024-11-20 07:28:31.918977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.665 [2024-11-20 07:28:31.919025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.665 [2024-11-20 07:28:31.919039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.665 [2024-11-20 07:28:31.919046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.665 [2024-11-20 07:28:31.919053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.665 [2024-11-20 07:28:31.919068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.665 qpair failed and we were unable to recover it. 
00:30:09.665 [2024-11-20 07:28:31.928986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.665 [2024-11-20 07:28:31.929034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.665 [2024-11-20 07:28:31.929047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.665 [2024-11-20 07:28:31.929055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.665 [2024-11-20 07:28:31.929062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.665 [2024-11-20 07:28:31.929077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.665 qpair failed and we were unable to recover it. 00:30:09.929 [2024-11-20 07:28:31.939067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.929 [2024-11-20 07:28:31.939122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.929 [2024-11-20 07:28:31.939136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.929 [2024-11-20 07:28:31.939143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.929 [2024-11-20 07:28:31.939150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.929 [2024-11-20 07:28:31.939172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.929 qpair failed and we were unable to recover it. 00:30:09.929 [2024-11-20 07:28:31.949044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.929 [2024-11-20 07:28:31.949097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.929 [2024-11-20 07:28:31.949115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.929 [2024-11-20 07:28:31.949122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.929 [2024-11-20 07:28:31.949128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.929 [2024-11-20 07:28:31.949143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.929 qpair failed and we were unable to recover it. 
00:30:09.929 [2024-11-20 07:28:31.959083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.929 [2024-11-20 07:28:31.959131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.929 [2024-11-20 07:28:31.959144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.929 [2024-11-20 07:28:31.959151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.929 [2024-11-20 07:28:31.959162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.929 [2024-11-20 07:28:31.959179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.929 qpair failed and we were unable to recover it. 00:30:09.929 [2024-11-20 07:28:31.969105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.929 [2024-11-20 07:28:31.969156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.929 [2024-11-20 07:28:31.969174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.929 [2024-11-20 07:28:31.969181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.929 [2024-11-20 07:28:31.969187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.929 [2024-11-20 07:28:31.969202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.929 qpair failed and we were unable to recover it. 00:30:09.929 [2024-11-20 07:28:31.979181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.929 [2024-11-20 07:28:31.979250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.929 [2024-11-20 07:28:31.979264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.929 [2024-11-20 07:28:31.979271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.929 [2024-11-20 07:28:31.979278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.929 [2024-11-20 07:28:31.979292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.929 qpair failed and we were unable to recover it. 
00:30:09.929 [2024-11-20 07:28:31.989163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.929 [2024-11-20 07:28:31.989259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.929 [2024-11-20 07:28:31.989273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.929 [2024-11-20 07:28:31.989284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.929 [2024-11-20 07:28:31.989291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.929 [2024-11-20 07:28:31.989306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.929 qpair failed and we were unable to recover it. 00:30:09.929 [2024-11-20 07:28:31.999182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.929 [2024-11-20 07:28:31.999273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.929 [2024-11-20 07:28:31.999287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.929 [2024-11-20 07:28:31.999295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.929 [2024-11-20 07:28:31.999301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.929 [2024-11-20 07:28:31.999316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.929 qpair failed and we were unable to recover it. 00:30:09.929 [2024-11-20 07:28:32.009201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.929 [2024-11-20 07:28:32.009249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.929 [2024-11-20 07:28:32.009263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.929 [2024-11-20 07:28:32.009270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.929 [2024-11-20 07:28:32.009277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.929 [2024-11-20 07:28:32.009292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.929 qpair failed and we were unable to recover it. 
00:30:09.929 [2024-11-20 07:28:32.019319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.929 [2024-11-20 07:28:32.019377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.929 [2024-11-20 07:28:32.019392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.929 [2024-11-20 07:28:32.019399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.929 [2024-11-20 07:28:32.019406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.929 [2024-11-20 07:28:32.019423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.929 qpair failed and we were unable to recover it. 00:30:09.929 [2024-11-20 07:28:32.029253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.929 [2024-11-20 07:28:32.029319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.929 [2024-11-20 07:28:32.029333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.929 [2024-11-20 07:28:32.029340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.929 [2024-11-20 07:28:32.029347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.929 [2024-11-20 07:28:32.029362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.929 qpair failed and we were unable to recover it. 00:30:09.929 [2024-11-20 07:28:32.039271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.929 [2024-11-20 07:28:32.039318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.930 [2024-11-20 07:28:32.039332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.930 [2024-11-20 07:28:32.039339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.930 [2024-11-20 07:28:32.039346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.930 [2024-11-20 07:28:32.039361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.930 qpair failed and we were unable to recover it. 
00:30:09.930 [2024-11-20 07:28:32.049313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.930 [2024-11-20 07:28:32.049362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.930 [2024-11-20 07:28:32.049376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.930 [2024-11-20 07:28:32.049383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.930 [2024-11-20 07:28:32.049390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.930 [2024-11-20 07:28:32.049404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.930 qpair failed and we were unable to recover it. 00:30:09.930 [2024-11-20 07:28:32.059362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.930 [2024-11-20 07:28:32.059417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.930 [2024-11-20 07:28:32.059431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.930 [2024-11-20 07:28:32.059438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.930 [2024-11-20 07:28:32.059445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.930 [2024-11-20 07:28:32.059460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.930 qpair failed and we were unable to recover it. 00:30:09.930 [2024-11-20 07:28:32.069433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.930 [2024-11-20 07:28:32.069517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.930 [2024-11-20 07:28:32.069530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.930 [2024-11-20 07:28:32.069539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.930 [2024-11-20 07:28:32.069546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:09.930 [2024-11-20 07:28:32.069560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.930 qpair failed and we were unable to recover it. 
00:30:09.930 [2024-11-20 07:28:32.079396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.930 [2024-11-20 07:28:32.079473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.930 [2024-11-20 07:28:32.079486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.930 [2024-11-20 07:28:32.079493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.930 [2024-11-20 07:28:32.079500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:09.930 [2024-11-20 07:28:32.079516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.930 qpair failed and we were unable to recover it.
00:30:09.930 [2024-11-20 07:28:32.089427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.930 [2024-11-20 07:28:32.089475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.930 [2024-11-20 07:28:32.089488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.930 [2024-11-20 07:28:32.089496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.930 [2024-11-20 07:28:32.089502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:09.930 [2024-11-20 07:28:32.089517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.930 qpair failed and we were unable to recover it.
00:30:09.930 [2024-11-20 07:28:32.099378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.930 [2024-11-20 07:28:32.099445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.930 [2024-11-20 07:28:32.099458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.930 [2024-11-20 07:28:32.099466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.930 [2024-11-20 07:28:32.099472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:09.930 [2024-11-20 07:28:32.099487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.930 qpair failed and we were unable to recover it.
00:30:09.930 [2024-11-20 07:28:32.109485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.930 [2024-11-20 07:28:32.109574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.930 [2024-11-20 07:28:32.109587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.930 [2024-11-20 07:28:32.109595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.930 [2024-11-20 07:28:32.109601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:09.930 [2024-11-20 07:28:32.109616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.930 qpair failed and we were unable to recover it.
00:30:09.930 [2024-11-20 07:28:32.119534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.930 [2024-11-20 07:28:32.119626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.930 [2024-11-20 07:28:32.119639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.930 [2024-11-20 07:28:32.119650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.930 [2024-11-20 07:28:32.119657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:09.930 [2024-11-20 07:28:32.119672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.930 qpair failed and we were unable to recover it.
00:30:09.930 [2024-11-20 07:28:32.129489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.930 [2024-11-20 07:28:32.129538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.930 [2024-11-20 07:28:32.129551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.930 [2024-11-20 07:28:32.129559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.930 [2024-11-20 07:28:32.129565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:09.930 [2024-11-20 07:28:32.129580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.930 qpair failed and we were unable to recover it.
00:30:09.930 [2024-11-20 07:28:32.139580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.930 [2024-11-20 07:28:32.139634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.930 [2024-11-20 07:28:32.139647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.930 [2024-11-20 07:28:32.139654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.930 [2024-11-20 07:28:32.139661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:09.930 [2024-11-20 07:28:32.139675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.930 qpair failed and we were unable to recover it.
00:30:09.930 [2024-11-20 07:28:32.149493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.930 [2024-11-20 07:28:32.149547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.930 [2024-11-20 07:28:32.149560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.930 [2024-11-20 07:28:32.149568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.930 [2024-11-20 07:28:32.149574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:09.930 [2024-11-20 07:28:32.149589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.930 qpair failed and we were unable to recover it.
00:30:09.930 [2024-11-20 07:28:32.159593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.930 [2024-11-20 07:28:32.159639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.930 [2024-11-20 07:28:32.159652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.930 [2024-11-20 07:28:32.159660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.930 [2024-11-20 07:28:32.159666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:09.930 [2024-11-20 07:28:32.159684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.930 qpair failed and we were unable to recover it.
00:30:09.930 [2024-11-20 07:28:32.169629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.930 [2024-11-20 07:28:32.169679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.931 [2024-11-20 07:28:32.169692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.931 [2024-11-20 07:28:32.169699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.931 [2024-11-20 07:28:32.169706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:09.931 [2024-11-20 07:28:32.169720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.931 qpair failed and we were unable to recover it.
00:30:09.931 [2024-11-20 07:28:32.179698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.931 [2024-11-20 07:28:32.179752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.931 [2024-11-20 07:28:32.179765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.931 [2024-11-20 07:28:32.179772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.931 [2024-11-20 07:28:32.179778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:09.931 [2024-11-20 07:28:32.179793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.931 qpair failed and we were unable to recover it.
00:30:09.931 [2024-11-20 07:28:32.189681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.931 [2024-11-20 07:28:32.189735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.931 [2024-11-20 07:28:32.189748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.931 [2024-11-20 07:28:32.189756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.931 [2024-11-20 07:28:32.189762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:09.931 [2024-11-20 07:28:32.189777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.931 qpair failed and we were unable to recover it.
00:30:09.931 [2024-11-20 07:28:32.199699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.931 [2024-11-20 07:28:32.199751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.931 [2024-11-20 07:28:32.199765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.931 [2024-11-20 07:28:32.199772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.931 [2024-11-20 07:28:32.199778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:09.931 [2024-11-20 07:28:32.199793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.931 qpair failed and we were unable to recover it.
00:30:10.193 [2024-11-20 07:28:32.209625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.193 [2024-11-20 07:28:32.209672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.193 [2024-11-20 07:28:32.209687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.193 [2024-11-20 07:28:32.209694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.193 [2024-11-20 07:28:32.209701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.193 [2024-11-20 07:28:32.209716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.193 qpair failed and we were unable to recover it.
00:30:10.193 [2024-11-20 07:28:32.219672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.193 [2024-11-20 07:28:32.219742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.193 [2024-11-20 07:28:32.219756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.193 [2024-11-20 07:28:32.219763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.193 [2024-11-20 07:28:32.219769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.193 [2024-11-20 07:28:32.219784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.193 qpair failed and we were unable to recover it.
00:30:10.193 [2024-11-20 07:28:32.229799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.193 [2024-11-20 07:28:32.229899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.193 [2024-11-20 07:28:32.229913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.193 [2024-11-20 07:28:32.229921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.193 [2024-11-20 07:28:32.229927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.193 [2024-11-20 07:28:32.229947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.193 qpair failed and we were unable to recover it.
00:30:10.193 [2024-11-20 07:28:32.239792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.193 [2024-11-20 07:28:32.239843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.193 [2024-11-20 07:28:32.239857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.193 [2024-11-20 07:28:32.239864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.193 [2024-11-20 07:28:32.239871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.193 [2024-11-20 07:28:32.239886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.193 qpair failed and we were unable to recover it.
00:30:10.193 [2024-11-20 07:28:32.249878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.193 [2024-11-20 07:28:32.249925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.193 [2024-11-20 07:28:32.249943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.193 [2024-11-20 07:28:32.249950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.193 [2024-11-20 07:28:32.249956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.193 [2024-11-20 07:28:32.249971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.193 qpair failed and we were unable to recover it.
00:30:10.193 [2024-11-20 07:28:32.259805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.193 [2024-11-20 07:28:32.259868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.193 [2024-11-20 07:28:32.259882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.193 [2024-11-20 07:28:32.259889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.193 [2024-11-20 07:28:32.259896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.193 [2024-11-20 07:28:32.259910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.193 qpair failed and we were unable to recover it.
00:30:10.193 [2024-11-20 07:28:32.269918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.193 [2024-11-20 07:28:32.270012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.193 [2024-11-20 07:28:32.270026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.193 [2024-11-20 07:28:32.270034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.193 [2024-11-20 07:28:32.270040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.193 [2024-11-20 07:28:32.270055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.193 qpair failed and we were unable to recover it.
00:30:10.193 [2024-11-20 07:28:32.279911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.193 [2024-11-20 07:28:32.279963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.193 [2024-11-20 07:28:32.279976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.193 [2024-11-20 07:28:32.279984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.193 [2024-11-20 07:28:32.279990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.194 [2024-11-20 07:28:32.280005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.194 qpair failed and we were unable to recover it.
00:30:10.194 [2024-11-20 07:28:32.289925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.194 [2024-11-20 07:28:32.289991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.194 [2024-11-20 07:28:32.290004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.194 [2024-11-20 07:28:32.290011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.194 [2024-11-20 07:28:32.290021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.194 [2024-11-20 07:28:32.290036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.194 qpair failed and we were unable to recover it.
00:30:10.194 [2024-11-20 07:28:32.300020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.194 [2024-11-20 07:28:32.300076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.194 [2024-11-20 07:28:32.300089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.194 [2024-11-20 07:28:32.300097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.194 [2024-11-20 07:28:32.300103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.194 [2024-11-20 07:28:32.300118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.194 qpair failed and we were unable to recover it.
00:30:10.194 [2024-11-20 07:28:32.310051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.194 [2024-11-20 07:28:32.310106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.194 [2024-11-20 07:28:32.310119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.194 [2024-11-20 07:28:32.310126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.194 [2024-11-20 07:28:32.310133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.194 [2024-11-20 07:28:32.310147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.194 qpair failed and we were unable to recover it.
00:30:10.194 [2024-11-20 07:28:32.320035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.194 [2024-11-20 07:28:32.320080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.194 [2024-11-20 07:28:32.320093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.194 [2024-11-20 07:28:32.320100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.194 [2024-11-20 07:28:32.320107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.194 [2024-11-20 07:28:32.320121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.194 qpair failed and we were unable to recover it.
00:30:10.194 [2024-11-20 07:28:32.329981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.194 [2024-11-20 07:28:32.330025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.194 [2024-11-20 07:28:32.330038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.194 [2024-11-20 07:28:32.330046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.194 [2024-11-20 07:28:32.330052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.194 [2024-11-20 07:28:32.330067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.194 qpair failed and we were unable to recover it.
00:30:10.194 [2024-11-20 07:28:32.340130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.194 [2024-11-20 07:28:32.340193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.194 [2024-11-20 07:28:32.340207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.194 [2024-11-20 07:28:32.340214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.194 [2024-11-20 07:28:32.340220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.194 [2024-11-20 07:28:32.340236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.194 qpair failed and we were unable to recover it.
00:30:10.194 [2024-11-20 07:28:32.350112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.194 [2024-11-20 07:28:32.350162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.194 [2024-11-20 07:28:32.350176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.194 [2024-11-20 07:28:32.350183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.194 [2024-11-20 07:28:32.350190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.194 [2024-11-20 07:28:32.350205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.194 qpair failed and we were unable to recover it.
00:30:10.194 [2024-11-20 07:28:32.360135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.194 [2024-11-20 07:28:32.360188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.194 [2024-11-20 07:28:32.360201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.194 [2024-11-20 07:28:32.360208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.194 [2024-11-20 07:28:32.360215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.194 [2024-11-20 07:28:32.360229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.194 qpair failed and we were unable to recover it.
00:30:10.194 [2024-11-20 07:28:32.370162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.194 [2024-11-20 07:28:32.370209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.194 [2024-11-20 07:28:32.370222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.194 [2024-11-20 07:28:32.370229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.194 [2024-11-20 07:28:32.370235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.194 [2024-11-20 07:28:32.370250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.194 qpair failed and we were unable to recover it.
00:30:10.194 [2024-11-20 07:28:32.380120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.194 [2024-11-20 07:28:32.380183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.194 [2024-11-20 07:28:32.380200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.194 [2024-11-20 07:28:32.380207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.194 [2024-11-20 07:28:32.380214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.194 [2024-11-20 07:28:32.380228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.194 qpair failed and we were unable to recover it.
00:30:10.194 [2024-11-20 07:28:32.390221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.194 [2024-11-20 07:28:32.390273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.194 [2024-11-20 07:28:32.390286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.194 [2024-11-20 07:28:32.390293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.194 [2024-11-20 07:28:32.390300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.194 [2024-11-20 07:28:32.390314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.194 qpair failed and we were unable to recover it.
00:30:10.194 [2024-11-20 07:28:32.400115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.194 [2024-11-20 07:28:32.400168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.194 [2024-11-20 07:28:32.400183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.194 [2024-11-20 07:28:32.400190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.194 [2024-11-20 07:28:32.400197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.194 [2024-11-20 07:28:32.400212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.194 qpair failed and we were unable to recover it.
00:30:10.194 [2024-11-20 07:28:32.410249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.194 [2024-11-20 07:28:32.410324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.194 [2024-11-20 07:28:32.410338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.195 [2024-11-20 07:28:32.410345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.195 [2024-11-20 07:28:32.410352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.195 [2024-11-20 07:28:32.410367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.195 qpair failed and we were unable to recover it.
00:30:10.195 [2024-11-20 07:28:32.420346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.195 [2024-11-20 07:28:32.420401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.195 [2024-11-20 07:28:32.420414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.195 [2024-11-20 07:28:32.420422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.195 [2024-11-20 07:28:32.420432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.195 [2024-11-20 07:28:32.420447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.195 qpair failed and we were unable to recover it.
00:30:10.195 [2024-11-20 07:28:32.430337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.195 [2024-11-20 07:28:32.430389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.195 [2024-11-20 07:28:32.430402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.195 [2024-11-20 07:28:32.430409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.195 [2024-11-20 07:28:32.430416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.195 [2024-11-20 07:28:32.430430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.195 qpair failed and we were unable to recover it.
00:30:10.195 [2024-11-20 07:28:32.440336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.195 [2024-11-20 07:28:32.440385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.195 [2024-11-20 07:28:32.440398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.195 [2024-11-20 07:28:32.440406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.195 [2024-11-20 07:28:32.440412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.195 [2024-11-20 07:28:32.440426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.195 qpair failed and we were unable to recover it.
00:30:10.195 [2024-11-20 07:28:32.450362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.195 [2024-11-20 07:28:32.450417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.195 [2024-11-20 07:28:32.450431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.195 [2024-11-20 07:28:32.450438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.195 [2024-11-20 07:28:32.450444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.195 [2024-11-20 07:28:32.450459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.195 qpair failed and we were unable to recover it.
00:30:10.195 [2024-11-20 07:28:32.460420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.195 [2024-11-20 07:28:32.460490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.195 [2024-11-20 07:28:32.460503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.195 [2024-11-20 07:28:32.460510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.195 [2024-11-20 07:28:32.460517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.195 [2024-11-20 07:28:32.460532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.195 qpair failed and we were unable to recover it.
00:30:10.457 [2024-11-20 07:28:32.470454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.457 [2024-11-20 07:28:32.470506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.457 [2024-11-20 07:28:32.470519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.457 [2024-11-20 07:28:32.470526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.457 [2024-11-20 07:28:32.470533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.457 [2024-11-20 07:28:32.470548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.457 qpair failed and we were unable to recover it.
00:30:10.458 [2024-11-20 07:28:32.480460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.458 [2024-11-20 07:28:32.480512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.458 [2024-11-20 07:28:32.480525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.458 [2024-11-20 07:28:32.480533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.458 [2024-11-20 07:28:32.480539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.458 [2024-11-20 07:28:32.480554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.458 qpair failed and we were unable to recover it.
00:30:10.458 [2024-11-20 07:28:32.490370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.458 [2024-11-20 07:28:32.490434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.458 [2024-11-20 07:28:32.490447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.458 [2024-11-20 07:28:32.490454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.458 [2024-11-20 07:28:32.490461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.458 [2024-11-20 07:28:32.490475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.458 qpair failed and we were unable to recover it.
00:30:10.458 [2024-11-20 07:28:32.500541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.458 [2024-11-20 07:28:32.500594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.458 [2024-11-20 07:28:32.500608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.458 [2024-11-20 07:28:32.500615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.458 [2024-11-20 07:28:32.500621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.458 [2024-11-20 07:28:32.500636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.458 qpair failed and we were unable to recover it.
00:30:10.458 [2024-11-20 07:28:32.510517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.458 [2024-11-20 07:28:32.510566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.458 [2024-11-20 07:28:32.510583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.458 [2024-11-20 07:28:32.510590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.458 [2024-11-20 07:28:32.510596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.458 [2024-11-20 07:28:32.510611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.458 qpair failed and we were unable to recover it.
00:30:10.458 [2024-11-20 07:28:32.520561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.458 [2024-11-20 07:28:32.520609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.458 [2024-11-20 07:28:32.520623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.458 [2024-11-20 07:28:32.520630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.458 [2024-11-20 07:28:32.520637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.458 [2024-11-20 07:28:32.520651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.458 qpair failed and we were unable to recover it.
00:30:10.458 [2024-11-20 07:28:32.530451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.458 [2024-11-20 07:28:32.530498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.458 [2024-11-20 07:28:32.530510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.458 [2024-11-20 07:28:32.530518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.458 [2024-11-20 07:28:32.530524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.458 [2024-11-20 07:28:32.530538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.458 qpair failed and we were unable to recover it.
00:30:10.458 [2024-11-20 07:28:32.540654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.458 [2024-11-20 07:28:32.540709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.458 [2024-11-20 07:28:32.540722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.458 [2024-11-20 07:28:32.540729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.458 [2024-11-20 07:28:32.540736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.458 [2024-11-20 07:28:32.540751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.458 qpair failed and we were unable to recover it.
00:30:10.458 [2024-11-20 07:28:32.550527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.458 [2024-11-20 07:28:32.550579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.458 [2024-11-20 07:28:32.550594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.458 [2024-11-20 07:28:32.550605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.458 [2024-11-20 07:28:32.550612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.458 [2024-11-20 07:28:32.550627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.458 qpair failed and we were unable to recover it.
00:30:10.458 [2024-11-20 07:28:32.560615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.458 [2024-11-20 07:28:32.560662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.458 [2024-11-20 07:28:32.560676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.458 [2024-11-20 07:28:32.560683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.458 [2024-11-20 07:28:32.560690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.458 [2024-11-20 07:28:32.560704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.458 qpair failed and we were unable to recover it.
00:30:10.458 [2024-11-20 07:28:32.570678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.458 [2024-11-20 07:28:32.570729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.458 [2024-11-20 07:28:32.570742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.458 [2024-11-20 07:28:32.570749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.458 [2024-11-20 07:28:32.570756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.458 [2024-11-20 07:28:32.570770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.458 qpair failed and we were unable to recover it.
00:30:10.458 [2024-11-20 07:28:32.580760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.458 [2024-11-20 07:28:32.580863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.458 [2024-11-20 07:28:32.580876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.458 [2024-11-20 07:28:32.580883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.458 [2024-11-20 07:28:32.580890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.458 [2024-11-20 07:28:32.580904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.458 qpair failed and we were unable to recover it.
00:30:10.458 [2024-11-20 07:28:32.590757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.458 [2024-11-20 07:28:32.590823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.458 [2024-11-20 07:28:32.590836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.458 [2024-11-20 07:28:32.590843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.458 [2024-11-20 07:28:32.590850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.458 [2024-11-20 07:28:32.590864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.458 qpair failed and we were unable to recover it.
00:30:10.458 [2024-11-20 07:28:32.600684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.458 [2024-11-20 07:28:32.600733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.458 [2024-11-20 07:28:32.600746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.458 [2024-11-20 07:28:32.600754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.458 [2024-11-20 07:28:32.600760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.458 [2024-11-20 07:28:32.600774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.459 qpair failed and we were unable to recover it.
00:30:10.459 [2024-11-20 07:28:32.610816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.459 [2024-11-20 07:28:32.610904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.459 [2024-11-20 07:28:32.610917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.459 [2024-11-20 07:28:32.610925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.459 [2024-11-20 07:28:32.610932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.459 [2024-11-20 07:28:32.610946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.459 qpair failed and we were unable to recover it.
00:30:10.459 [2024-11-20 07:28:32.620869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.459 [2024-11-20 07:28:32.620927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.459 [2024-11-20 07:28:32.620940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.459 [2024-11-20 07:28:32.620947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.459 [2024-11-20 07:28:32.620954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.459 [2024-11-20 07:28:32.620968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.459 qpair failed and we were unable to recover it.
00:30:10.459 [2024-11-20 07:28:32.630814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.459 [2024-11-20 07:28:32.630867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.459 [2024-11-20 07:28:32.630880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.459 [2024-11-20 07:28:32.630887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.459 [2024-11-20 07:28:32.630894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.459 [2024-11-20 07:28:32.630908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.459 qpair failed and we were unable to recover it.
00:30:10.459 [2024-11-20 07:28:32.640855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.459 [2024-11-20 07:28:32.640910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.459 [2024-11-20 07:28:32.640923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.459 [2024-11-20 07:28:32.640930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.459 [2024-11-20 07:28:32.640937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.459 [2024-11-20 07:28:32.640952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.459 qpair failed and we were unable to recover it.
00:30:10.459 [2024-11-20 07:28:32.650878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.459 [2024-11-20 07:28:32.650924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.459 [2024-11-20 07:28:32.650938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.459 [2024-11-20 07:28:32.650946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.459 [2024-11-20 07:28:32.650952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.459 [2024-11-20 07:28:32.650967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.459 qpair failed and we were unable to recover it.
00:30:10.459 [2024-11-20 07:28:32.660982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.459 [2024-11-20 07:28:32.661036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.459 [2024-11-20 07:28:32.661049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.459 [2024-11-20 07:28:32.661056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.459 [2024-11-20 07:28:32.661062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.459 [2024-11-20 07:28:32.661077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.459 qpair failed and we were unable to recover it.
00:30:10.459 [2024-11-20 07:28:32.670929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.459 [2024-11-20 07:28:32.671005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.459 [2024-11-20 07:28:32.671019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.459 [2024-11-20 07:28:32.671026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.459 [2024-11-20 07:28:32.671033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.459 [2024-11-20 07:28:32.671047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.459 qpair failed and we were unable to recover it.
00:30:10.459 [2024-11-20 07:28:32.680855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.459 [2024-11-20 07:28:32.680908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.459 [2024-11-20 07:28:32.680921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.459 [2024-11-20 07:28:32.680932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.459 [2024-11-20 07:28:32.680939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.459 [2024-11-20 07:28:32.680953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.459 qpair failed and we were unable to recover it.
00:30:10.459 [2024-11-20 07:28:32.691017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.459 [2024-11-20 07:28:32.691066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.459 [2024-11-20 07:28:32.691079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.459 [2024-11-20 07:28:32.691087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.459 [2024-11-20 07:28:32.691093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.459 [2024-11-20 07:28:32.691107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.459 qpair failed and we were unable to recover it.
00:30:10.459 [2024-11-20 07:28:32.701076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.459 [2024-11-20 07:28:32.701129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.459 [2024-11-20 07:28:32.701143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.459 [2024-11-20 07:28:32.701150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.459 [2024-11-20 07:28:32.701156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.459 [2024-11-20 07:28:32.701175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.459 qpair failed and we were unable to recover it.
00:30:10.459 [2024-11-20 07:28:32.711076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.459 [2024-11-20 07:28:32.711167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.459 [2024-11-20 07:28:32.711181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.459 [2024-11-20 07:28:32.711188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.459 [2024-11-20 07:28:32.711194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.459 [2024-11-20 07:28:32.711209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.459 qpair failed and we were unable to recover it.
00:30:10.459 [2024-11-20 07:28:32.721087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.459 [2024-11-20 07:28:32.721134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.459 [2024-11-20 07:28:32.721147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.459 [2024-11-20 07:28:32.721154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.459 [2024-11-20 07:28:32.721164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.459 [2024-11-20 07:28:32.721182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.459 qpair failed and we were unable to recover it.
00:30:10.722 [2024-11-20 07:28:32.731114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.722 [2024-11-20 07:28:32.731165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.722 [2024-11-20 07:28:32.731179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.722 [2024-11-20 07:28:32.731186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.722 [2024-11-20 07:28:32.731193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.722 [2024-11-20 07:28:32.731207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.722 qpair failed and we were unable to recover it.
00:30:10.722 [2024-11-20 07:28:32.741155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.722 [2024-11-20 07:28:32.741223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.722 [2024-11-20 07:28:32.741236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.722 [2024-11-20 07:28:32.741244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.722 [2024-11-20 07:28:32.741250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.722 [2024-11-20 07:28:32.741265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.722 qpair failed and we were unable to recover it.
00:30:10.722 [2024-11-20 07:28:32.751145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.722 [2024-11-20 07:28:32.751246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.722 [2024-11-20 07:28:32.751259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.722 [2024-11-20 07:28:32.751266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.722 [2024-11-20 07:28:32.751273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.722 [2024-11-20 07:28:32.751289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.722 qpair failed and we were unable to recover it.
00:30:10.722 [2024-11-20 07:28:32.761185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.722 [2024-11-20 07:28:32.761245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.722 [2024-11-20 07:28:32.761258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.722 [2024-11-20 07:28:32.761265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.722 [2024-11-20 07:28:32.761272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90
00:30:10.722 [2024-11-20 07:28:32.761286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.722 qpair failed and we were unable to recover it.
00:30:10.722 [2024-11-20 07:28:32.771228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.722 [2024-11-20 07:28:32.771277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.722 [2024-11-20 07:28:32.771290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.722 [2024-11-20 07:28:32.771297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.722 [2024-11-20 07:28:32.771303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.722 [2024-11-20 07:28:32.771318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.722 qpair failed and we were unable to recover it. 00:30:10.722 [2024-11-20 07:28:32.781305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.722 [2024-11-20 07:28:32.781361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.722 [2024-11-20 07:28:32.781374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.722 [2024-11-20 07:28:32.781381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.722 [2024-11-20 07:28:32.781388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.722 [2024-11-20 07:28:32.781402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.722 qpair failed and we were unable to recover it. 00:30:10.722 [2024-11-20 07:28:32.791304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.722 [2024-11-20 07:28:32.791357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.722 [2024-11-20 07:28:32.791369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.722 [2024-11-20 07:28:32.791377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.722 [2024-11-20 07:28:32.791383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.722 [2024-11-20 07:28:32.791397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.722 qpair failed and we were unable to recover it. 
00:30:10.722 [2024-11-20 07:28:32.801315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.722 [2024-11-20 07:28:32.801367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.722 [2024-11-20 07:28:32.801380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.722 [2024-11-20 07:28:32.801387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.722 [2024-11-20 07:28:32.801393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.722 [2024-11-20 07:28:32.801408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.722 qpair failed and we were unable to recover it. 00:30:10.722 [2024-11-20 07:28:32.811301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.722 [2024-11-20 07:28:32.811349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.722 [2024-11-20 07:28:32.811365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.722 [2024-11-20 07:28:32.811372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.722 [2024-11-20 07:28:32.811379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.722 [2024-11-20 07:28:32.811393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.722 qpair failed and we were unable to recover it. 00:30:10.722 [2024-11-20 07:28:32.821433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.722 [2024-11-20 07:28:32.821488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.722 [2024-11-20 07:28:32.821501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.722 [2024-11-20 07:28:32.821509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.722 [2024-11-20 07:28:32.821515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.722 [2024-11-20 07:28:32.821529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.722 qpair failed and we were unable to recover it. 
00:30:10.722 [2024-11-20 07:28:32.831411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.722 [2024-11-20 07:28:32.831493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.722 [2024-11-20 07:28:32.831506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.722 [2024-11-20 07:28:32.831513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.722 [2024-11-20 07:28:32.831519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.722 [2024-11-20 07:28:32.831534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.722 qpair failed and we were unable to recover it. 00:30:10.722 [2024-11-20 07:28:32.841415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.722 [2024-11-20 07:28:32.841464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.722 [2024-11-20 07:28:32.841477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.722 [2024-11-20 07:28:32.841484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.722 [2024-11-20 07:28:32.841490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.723 [2024-11-20 07:28:32.841504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.723 qpair failed and we were unable to recover it. 00:30:10.723 [2024-11-20 07:28:32.851433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.723 [2024-11-20 07:28:32.851496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.723 [2024-11-20 07:28:32.851509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.723 [2024-11-20 07:28:32.851516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.723 [2024-11-20 07:28:32.851526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.723 [2024-11-20 07:28:32.851541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.723 qpair failed and we were unable to recover it. 
00:30:10.723 [2024-11-20 07:28:32.861520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.723 [2024-11-20 07:28:32.861574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.723 [2024-11-20 07:28:32.861587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.723 [2024-11-20 07:28:32.861594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.723 [2024-11-20 07:28:32.861601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.723 [2024-11-20 07:28:32.861615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.723 qpair failed and we were unable to recover it. 00:30:10.723 [2024-11-20 07:28:32.871445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.723 [2024-11-20 07:28:32.871547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.723 [2024-11-20 07:28:32.871562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.723 [2024-11-20 07:28:32.871569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.723 [2024-11-20 07:28:32.871576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.723 [2024-11-20 07:28:32.871591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.723 qpair failed and we were unable to recover it. 00:30:10.723 [2024-11-20 07:28:32.881518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.723 [2024-11-20 07:28:32.881569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.723 [2024-11-20 07:28:32.881582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.723 [2024-11-20 07:28:32.881590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.723 [2024-11-20 07:28:32.881596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.723 [2024-11-20 07:28:32.881610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.723 qpair failed and we were unable to recover it. 
00:30:10.723 [2024-11-20 07:28:32.891565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.723 [2024-11-20 07:28:32.891613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.723 [2024-11-20 07:28:32.891626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.723 [2024-11-20 07:28:32.891633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.723 [2024-11-20 07:28:32.891640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.723 [2024-11-20 07:28:32.891654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.723 qpair failed and we were unable to recover it. 00:30:10.723 [2024-11-20 07:28:32.901665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.723 [2024-11-20 07:28:32.901721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.723 [2024-11-20 07:28:32.901734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.723 [2024-11-20 07:28:32.901742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.723 [2024-11-20 07:28:32.901748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.723 [2024-11-20 07:28:32.901762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.723 qpair failed and we were unable to recover it. 00:30:10.723 [2024-11-20 07:28:32.911629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.723 [2024-11-20 07:28:32.911676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.723 [2024-11-20 07:28:32.911689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.723 [2024-11-20 07:28:32.911696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.723 [2024-11-20 07:28:32.911703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.723 [2024-11-20 07:28:32.911717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.723 qpair failed and we were unable to recover it. 
00:30:10.723 [2024-11-20 07:28:32.921647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.723 [2024-11-20 07:28:32.921696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.723 [2024-11-20 07:28:32.921709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.723 [2024-11-20 07:28:32.921716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.723 [2024-11-20 07:28:32.921723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.723 [2024-11-20 07:28:32.921738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.723 qpair failed and we were unable to recover it. 00:30:10.723 [2024-11-20 07:28:32.931667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.723 [2024-11-20 07:28:32.931752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.723 [2024-11-20 07:28:32.931765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.723 [2024-11-20 07:28:32.931773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.723 [2024-11-20 07:28:32.931779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.723 [2024-11-20 07:28:32.931793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.723 qpair failed and we were unable to recover it. 00:30:10.723 [2024-11-20 07:28:32.941724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.723 [2024-11-20 07:28:32.941781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.723 [2024-11-20 07:28:32.941797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.723 [2024-11-20 07:28:32.941804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.723 [2024-11-20 07:28:32.941810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.723 [2024-11-20 07:28:32.941825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.723 qpair failed and we were unable to recover it. 
00:30:10.723 [2024-11-20 07:28:32.951721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.723 [2024-11-20 07:28:32.951772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.723 [2024-11-20 07:28:32.951785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.723 [2024-11-20 07:28:32.951792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.723 [2024-11-20 07:28:32.951799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.723 [2024-11-20 07:28:32.951813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.723 qpair failed and we were unable to recover it. 00:30:10.723 [2024-11-20 07:28:32.961747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.723 [2024-11-20 07:28:32.961800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.723 [2024-11-20 07:28:32.961813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.723 [2024-11-20 07:28:32.961820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.723 [2024-11-20 07:28:32.961826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.723 [2024-11-20 07:28:32.961841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.723 qpair failed and we were unable to recover it. 00:30:10.723 [2024-11-20 07:28:32.971767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.723 [2024-11-20 07:28:32.971858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.723 [2024-11-20 07:28:32.971871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.723 [2024-11-20 07:28:32.971879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.723 [2024-11-20 07:28:32.971886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.724 [2024-11-20 07:28:32.971901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.724 qpair failed and we were unable to recover it. 
00:30:10.724 [2024-11-20 07:28:32.981839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.724 [2024-11-20 07:28:32.981927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.724 [2024-11-20 07:28:32.981940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.724 [2024-11-20 07:28:32.981948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.724 [2024-11-20 07:28:32.981957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.724 [2024-11-20 07:28:32.981973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.724 qpair failed and we were unable to recover it. 00:30:10.724 [2024-11-20 07:28:32.991847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.724 [2024-11-20 07:28:32.991913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.724 [2024-11-20 07:28:32.991929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.724 [2024-11-20 07:28:32.991936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.724 [2024-11-20 07:28:32.991943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.724 [2024-11-20 07:28:32.991958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.724 qpair failed and we were unable to recover it. 00:30:10.986 [2024-11-20 07:28:33.001728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.986 [2024-11-20 07:28:33.001775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.986 [2024-11-20 07:28:33.001790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.986 [2024-11-20 07:28:33.001798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.986 [2024-11-20 07:28:33.001804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.986 [2024-11-20 07:28:33.001825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.986 qpair failed and we were unable to recover it. 
00:30:10.986 [2024-11-20 07:28:33.011881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.986 [2024-11-20 07:28:33.011934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.986 [2024-11-20 07:28:33.011949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.986 [2024-11-20 07:28:33.011956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.986 [2024-11-20 07:28:33.011963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.986 [2024-11-20 07:28:33.011977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.986 qpair failed and we were unable to recover it. 00:30:10.986 [2024-11-20 07:28:33.021823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.986 [2024-11-20 07:28:33.021884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.986 [2024-11-20 07:28:33.021897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.986 [2024-11-20 07:28:33.021904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.986 [2024-11-20 07:28:33.021911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.986 [2024-11-20 07:28:33.021925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.986 qpair failed and we were unable to recover it. 00:30:10.986 [2024-11-20 07:28:33.031926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.986 [2024-11-20 07:28:33.032005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.986 [2024-11-20 07:28:33.032029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.986 [2024-11-20 07:28:33.032039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.986 [2024-11-20 07:28:33.032046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.986 [2024-11-20 07:28:33.032066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.986 qpair failed and we were unable to recover it. 
00:30:10.986 [2024-11-20 07:28:33.041990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.986 [2024-11-20 07:28:33.042070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.986 [2024-11-20 07:28:33.042085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.986 [2024-11-20 07:28:33.042092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.986 [2024-11-20 07:28:33.042099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.986 [2024-11-20 07:28:33.042114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.986 qpair failed and we were unable to recover it. 00:30:10.986 [2024-11-20 07:28:33.051985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.986 [2024-11-20 07:28:33.052035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.987 [2024-11-20 07:28:33.052049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.987 [2024-11-20 07:28:33.052056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.987 [2024-11-20 07:28:33.052063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.987 [2024-11-20 07:28:33.052078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.987 qpair failed and we were unable to recover it. 00:30:10.987 [2024-11-20 07:28:33.062051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.987 [2024-11-20 07:28:33.062104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.987 [2024-11-20 07:28:33.062117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.987 [2024-11-20 07:28:33.062125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.987 [2024-11-20 07:28:33.062132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.987 [2024-11-20 07:28:33.062146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.987 qpair failed and we were unable to recover it. 
00:30:10.987 [2024-11-20 07:28:33.072040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.987 [2024-11-20 07:28:33.072137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.987 [2024-11-20 07:28:33.072155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.987 [2024-11-20 07:28:33.072166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.987 [2024-11-20 07:28:33.072173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.987 [2024-11-20 07:28:33.072188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.987 qpair failed and we were unable to recover it. 00:30:10.987 [2024-11-20 07:28:33.082056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.987 [2024-11-20 07:28:33.082104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.987 [2024-11-20 07:28:33.082118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.987 [2024-11-20 07:28:33.082125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.987 [2024-11-20 07:28:33.082131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.987 [2024-11-20 07:28:33.082146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.987 qpair failed and we were unable to recover it. 00:30:10.987 [2024-11-20 07:28:33.092090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.987 [2024-11-20 07:28:33.092138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.987 [2024-11-20 07:28:33.092151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.987 [2024-11-20 07:28:33.092162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.987 [2024-11-20 07:28:33.092169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.987 [2024-11-20 07:28:33.092184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.987 qpair failed and we were unable to recover it. 
00:30:10.987 [2024-11-20 07:28:33.102170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.987 [2024-11-20 07:28:33.102228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.987 [2024-11-20 07:28:33.102241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.987 [2024-11-20 07:28:33.102248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.987 [2024-11-20 07:28:33.102255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.987 [2024-11-20 07:28:33.102270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.987 qpair failed and we were unable to recover it. 00:30:10.987 [2024-11-20 07:28:33.112164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.987 [2024-11-20 07:28:33.112209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.987 [2024-11-20 07:28:33.112223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.987 [2024-11-20 07:28:33.112234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.987 [2024-11-20 07:28:33.112240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.987 [2024-11-20 07:28:33.112255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.987 qpair failed and we were unable to recover it. 00:30:10.987 [2024-11-20 07:28:33.122171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.987 [2024-11-20 07:28:33.122221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.987 [2024-11-20 07:28:33.122234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.987 [2024-11-20 07:28:33.122242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.987 [2024-11-20 07:28:33.122248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.987 [2024-11-20 07:28:33.122263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.987 qpair failed and we were unable to recover it. 
00:30:10.987 [2024-11-20 07:28:33.132224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.987 [2024-11-20 07:28:33.132313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.987 [2024-11-20 07:28:33.132326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.987 [2024-11-20 07:28:33.132334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.987 [2024-11-20 07:28:33.132341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.987 [2024-11-20 07:28:33.132355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.987 qpair failed and we were unable to recover it. 00:30:10.987 [2024-11-20 07:28:33.142275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.987 [2024-11-20 07:28:33.142377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.987 [2024-11-20 07:28:33.142391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.987 [2024-11-20 07:28:33.142398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.987 [2024-11-20 07:28:33.142405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.987 [2024-11-20 07:28:33.142419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.987 qpair failed and we were unable to recover it. 00:30:10.987 [2024-11-20 07:28:33.152283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.987 [2024-11-20 07:28:33.152336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.987 [2024-11-20 07:28:33.152349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.987 [2024-11-20 07:28:33.152356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.987 [2024-11-20 07:28:33.152363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.987 [2024-11-20 07:28:33.152381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.987 qpair failed and we were unable to recover it. 
00:30:10.987 [2024-11-20 07:28:33.162311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.987 [2024-11-20 07:28:33.162384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.987 [2024-11-20 07:28:33.162396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.987 [2024-11-20 07:28:33.162404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.987 [2024-11-20 07:28:33.162411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.987 [2024-11-20 07:28:33.162426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.987 qpair failed and we were unable to recover it. 00:30:10.987 [2024-11-20 07:28:33.172308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.987 [2024-11-20 07:28:33.172356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.987 [2024-11-20 07:28:33.172369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.987 [2024-11-20 07:28:33.172376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.987 [2024-11-20 07:28:33.172384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.987 [2024-11-20 07:28:33.172398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.987 qpair failed and we were unable to recover it. 00:30:10.987 [2024-11-20 07:28:33.182355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.987 [2024-11-20 07:28:33.182434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.988 [2024-11-20 07:28:33.182447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.988 [2024-11-20 07:28:33.182454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.988 [2024-11-20 07:28:33.182461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.988 [2024-11-20 07:28:33.182475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.988 qpair failed and we were unable to recover it. 
00:30:10.988 [2024-11-20 07:28:33.192430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.988 [2024-11-20 07:28:33.192516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.988 [2024-11-20 07:28:33.192529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.988 [2024-11-20 07:28:33.192537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.988 [2024-11-20 07:28:33.192544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.988 [2024-11-20 07:28:33.192558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.988 qpair failed and we were unable to recover it. 00:30:10.988 [2024-11-20 07:28:33.202388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.988 [2024-11-20 07:28:33.202452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.988 [2024-11-20 07:28:33.202465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.988 [2024-11-20 07:28:33.202472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.988 [2024-11-20 07:28:33.202478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.988 [2024-11-20 07:28:33.202492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.988 qpair failed and we were unable to recover it. 00:30:10.988 [2024-11-20 07:28:33.212303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.988 [2024-11-20 07:28:33.212354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.988 [2024-11-20 07:28:33.212368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.988 [2024-11-20 07:28:33.212375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.988 [2024-11-20 07:28:33.212381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.988 [2024-11-20 07:28:33.212396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.988 qpair failed and we were unable to recover it. 
00:30:10.988 [2024-11-20 07:28:33.222506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.988 [2024-11-20 07:28:33.222558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.988 [2024-11-20 07:28:33.222571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.988 [2024-11-20 07:28:33.222579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.988 [2024-11-20 07:28:33.222585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.988 [2024-11-20 07:28:33.222599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.988 qpair failed and we were unable to recover it. 00:30:10.988 [2024-11-20 07:28:33.232497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.988 [2024-11-20 07:28:33.232585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.988 [2024-11-20 07:28:33.232598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.988 [2024-11-20 07:28:33.232606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.988 [2024-11-20 07:28:33.232613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.988 [2024-11-20 07:28:33.232627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.988 qpair failed and we were unable to recover it. 00:30:10.988 [2024-11-20 07:28:33.242506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.988 [2024-11-20 07:28:33.242554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.988 [2024-11-20 07:28:33.242567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.988 [2024-11-20 07:28:33.242577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.988 [2024-11-20 07:28:33.242584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.988 [2024-11-20 07:28:33.242599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.988 qpair failed and we were unable to recover it. 
00:30:10.988 [2024-11-20 07:28:33.252528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.988 [2024-11-20 07:28:33.252572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.988 [2024-11-20 07:28:33.252586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.988 [2024-11-20 07:28:33.252593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.988 [2024-11-20 07:28:33.252599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:10.988 [2024-11-20 07:28:33.252614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.988 qpair failed and we were unable to recover it. 00:30:11.250 [2024-11-20 07:28:33.262602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.250 [2024-11-20 07:28:33.262659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.250 [2024-11-20 07:28:33.262672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.250 [2024-11-20 07:28:33.262679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.250 [2024-11-20 07:28:33.262686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.250 [2024-11-20 07:28:33.262701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.250 qpair failed and we were unable to recover it. 00:30:11.250 [2024-11-20 07:28:33.272599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.250 [2024-11-20 07:28:33.272649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.250 [2024-11-20 07:28:33.272662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.250 [2024-11-20 07:28:33.272669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.250 [2024-11-20 07:28:33.272676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.250 [2024-11-20 07:28:33.272690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.250 qpair failed and we were unable to recover it. 
00:30:11.250 [2024-11-20 07:28:33.282613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.250 [2024-11-20 07:28:33.282664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.250 [2024-11-20 07:28:33.282677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.250 [2024-11-20 07:28:33.282685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.250 [2024-11-20 07:28:33.282692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.250 [2024-11-20 07:28:33.282710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.250 qpair failed and we were unable to recover it. 00:30:11.250 [2024-11-20 07:28:33.292618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.250 [2024-11-20 07:28:33.292668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.250 [2024-11-20 07:28:33.292682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.250 [2024-11-20 07:28:33.292689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.250 [2024-11-20 07:28:33.292695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.250 [2024-11-20 07:28:33.292709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.250 qpair failed and we were unable to recover it. 00:30:11.250 [2024-11-20 07:28:33.302689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.250 [2024-11-20 07:28:33.302743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.250 [2024-11-20 07:28:33.302756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.250 [2024-11-20 07:28:33.302764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.250 [2024-11-20 07:28:33.302770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.250 [2024-11-20 07:28:33.302785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.250 qpair failed and we were unable to recover it. 
00:30:11.250 [2024-11-20 07:28:33.312686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.250 [2024-11-20 07:28:33.312737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.250 [2024-11-20 07:28:33.312751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.250 [2024-11-20 07:28:33.312758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.250 [2024-11-20 07:28:33.312765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.250 [2024-11-20 07:28:33.312780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.250 qpair failed and we were unable to recover it. 00:30:11.250 [2024-11-20 07:28:33.322722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.250 [2024-11-20 07:28:33.322772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.250 [2024-11-20 07:28:33.322785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.250 [2024-11-20 07:28:33.322792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.250 [2024-11-20 07:28:33.322799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.250 [2024-11-20 07:28:33.322814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.250 qpair failed and we were unable to recover it. 00:30:11.250 [2024-11-20 07:28:33.332746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.251 [2024-11-20 07:28:33.332797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.251 [2024-11-20 07:28:33.332811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.251 [2024-11-20 07:28:33.332818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.251 [2024-11-20 07:28:33.332826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.251 [2024-11-20 07:28:33.332840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.251 qpair failed and we were unable to recover it. 
00:30:11.251 [2024-11-20 07:28:33.342690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.251 [2024-11-20 07:28:33.342743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.251 [2024-11-20 07:28:33.342756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.251 [2024-11-20 07:28:33.342764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.251 [2024-11-20 07:28:33.342770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.251 [2024-11-20 07:28:33.342784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.251 qpair failed and we were unable to recover it. 00:30:11.251 [2024-11-20 07:28:33.352780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.251 [2024-11-20 07:28:33.352831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.251 [2024-11-20 07:28:33.352844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.251 [2024-11-20 07:28:33.352852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.251 [2024-11-20 07:28:33.352859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.251 [2024-11-20 07:28:33.352873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.251 qpair failed and we were unable to recover it. 00:30:11.251 [2024-11-20 07:28:33.362812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.251 [2024-11-20 07:28:33.362857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.251 [2024-11-20 07:28:33.362872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.251 [2024-11-20 07:28:33.362879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.251 [2024-11-20 07:28:33.362886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.251 [2024-11-20 07:28:33.362901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.251 qpair failed and we were unable to recover it. 
00:30:11.251 [2024-11-20 07:28:33.372715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.251 [2024-11-20 07:28:33.372765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.251 [2024-11-20 07:28:33.372782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.251 [2024-11-20 07:28:33.372789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.251 [2024-11-20 07:28:33.372795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.251 [2024-11-20 07:28:33.372811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.251 qpair failed and we were unable to recover it. 00:30:11.251 [2024-11-20 07:28:33.382830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.251 [2024-11-20 07:28:33.382881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.251 [2024-11-20 07:28:33.382896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.251 [2024-11-20 07:28:33.382903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.251 [2024-11-20 07:28:33.382910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.251 [2024-11-20 07:28:33.382926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.251 qpair failed and we were unable to recover it. 00:30:11.251 [2024-11-20 07:28:33.392926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.251 [2024-11-20 07:28:33.392983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.251 [2024-11-20 07:28:33.392997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.251 [2024-11-20 07:28:33.393004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.251 [2024-11-20 07:28:33.393011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.251 [2024-11-20 07:28:33.393025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.251 qpair failed and we were unable to recover it. 
00:30:11.251 [2024-11-20 07:28:33.402950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.251 [2024-11-20 07:28:33.402997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.251 [2024-11-20 07:28:33.403011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.251 [2024-11-20 07:28:33.403018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.251 [2024-11-20 07:28:33.403025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.251 [2024-11-20 07:28:33.403040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.251 qpair failed and we were unable to recover it. 00:30:11.251 [2024-11-20 07:28:33.413004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.251 [2024-11-20 07:28:33.413084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.251 [2024-11-20 07:28:33.413098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.251 [2024-11-20 07:28:33.413106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.251 [2024-11-20 07:28:33.413116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.251 [2024-11-20 07:28:33.413135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.251 qpair failed and we were unable to recover it. 00:30:11.251 [2024-11-20 07:28:33.423036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.251 [2024-11-20 07:28:33.423089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.251 [2024-11-20 07:28:33.423103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.251 [2024-11-20 07:28:33.423111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.251 [2024-11-20 07:28:33.423117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.251 [2024-11-20 07:28:33.423132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.251 qpair failed and we were unable to recover it. 
00:30:11.251 [2024-11-20 07:28:33.433005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.251 [2024-11-20 07:28:33.433050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.251 [2024-11-20 07:28:33.433064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.251 [2024-11-20 07:28:33.433071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.251 [2024-11-20 07:28:33.433077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.251 [2024-11-20 07:28:33.433091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.251 qpair failed and we were unable to recover it. 00:30:11.251 [2024-11-20 07:28:33.443048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.251 [2024-11-20 07:28:33.443098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.251 [2024-11-20 07:28:33.443112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.251 [2024-11-20 07:28:33.443119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.251 [2024-11-20 07:28:33.443126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.251 [2024-11-20 07:28:33.443140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.251 qpair failed and we were unable to recover it. 00:30:11.251 [2024-11-20 07:28:33.453026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.251 [2024-11-20 07:28:33.453071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.251 [2024-11-20 07:28:33.453084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.251 [2024-11-20 07:28:33.453091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.251 [2024-11-20 07:28:33.453098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.251 [2024-11-20 07:28:33.453113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.251 qpair failed and we were unable to recover it. 
00:30:11.251 [2024-11-20 07:28:33.463108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.252 [2024-11-20 07:28:33.463168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.252 [2024-11-20 07:28:33.463182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.252 [2024-11-20 07:28:33.463190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.252 [2024-11-20 07:28:33.463196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.252 [2024-11-20 07:28:33.463211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.252 qpair failed and we were unable to recover it. 00:30:11.252 [2024-11-20 07:28:33.473121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.252 [2024-11-20 07:28:33.473178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.252 [2024-11-20 07:28:33.473191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.252 [2024-11-20 07:28:33.473198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.252 [2024-11-20 07:28:33.473205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.252 [2024-11-20 07:28:33.473220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.252 qpair failed and we were unable to recover it. 00:30:11.252 [2024-11-20 07:28:33.483162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.252 [2024-11-20 07:28:33.483207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.252 [2024-11-20 07:28:33.483220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.252 [2024-11-20 07:28:33.483227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.252 [2024-11-20 07:28:33.483233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.252 [2024-11-20 07:28:33.483248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.252 qpair failed and we were unable to recover it. 
00:30:11.252 [2024-11-20 07:28:33.493163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.252 [2024-11-20 07:28:33.493209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.252 [2024-11-20 07:28:33.493222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.252 [2024-11-20 07:28:33.493229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.252 [2024-11-20 07:28:33.493236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.252 [2024-11-20 07:28:33.493251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.252 qpair failed and we were unable to recover it. 00:30:11.252 [2024-11-20 07:28:33.503239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.252 [2024-11-20 07:28:33.503298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.252 [2024-11-20 07:28:33.503315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.252 [2024-11-20 07:28:33.503322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.252 [2024-11-20 07:28:33.503329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.252 [2024-11-20 07:28:33.503343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.252 qpair failed and we were unable to recover it. 00:30:11.252 [2024-11-20 07:28:33.513197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.252 [2024-11-20 07:28:33.513253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.252 [2024-11-20 07:28:33.513277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.252 [2024-11-20 07:28:33.513285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.252 [2024-11-20 07:28:33.513292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.252 [2024-11-20 07:28:33.513309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.252 qpair failed and we were unable to recover it. 
00:30:11.514 [2024-11-20 07:28:33.523245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.514 [2024-11-20 07:28:33.523293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.514 [2024-11-20 07:28:33.523306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.514 [2024-11-20 07:28:33.523314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.514 [2024-11-20 07:28:33.523322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.514 [2024-11-20 07:28:33.523337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.514 qpair failed and we were unable to recover it. 00:30:11.514 [2024-11-20 07:28:33.533256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.514 [2024-11-20 07:28:33.533308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.514 [2024-11-20 07:28:33.533321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.514 [2024-11-20 07:28:33.533329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.514 [2024-11-20 07:28:33.533335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.514 [2024-11-20 07:28:33.533351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.514 qpair failed and we were unable to recover it. 00:30:11.514 [2024-11-20 07:28:33.543341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.514 [2024-11-20 07:28:33.543427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.514 [2024-11-20 07:28:33.543441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.514 [2024-11-20 07:28:33.543450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.514 [2024-11-20 07:28:33.543460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.514 [2024-11-20 07:28:33.543475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.514 qpair failed and we were unable to recover it. 
00:30:11.514 [2024-11-20 07:28:33.553219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.514 [2024-11-20 07:28:33.553266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.514 [2024-11-20 07:28:33.553280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.514 [2024-11-20 07:28:33.553288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.514 [2024-11-20 07:28:33.553295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.514 [2024-11-20 07:28:33.553310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.514 qpair failed and we were unable to recover it. 00:30:11.514 [2024-11-20 07:28:33.563413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.514 [2024-11-20 07:28:33.563456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.514 [2024-11-20 07:28:33.563470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.514 [2024-11-20 07:28:33.563477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.514 [2024-11-20 07:28:33.563483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.514 [2024-11-20 07:28:33.563498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.514 qpair failed and we were unable to recover it. 00:30:11.514 [2024-11-20 07:28:33.573377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.514 [2024-11-20 07:28:33.573427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.514 [2024-11-20 07:28:33.573440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.514 [2024-11-20 07:28:33.573447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.514 [2024-11-20 07:28:33.573454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.514 [2024-11-20 07:28:33.573468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.514 qpair failed and we were unable to recover it. 
00:30:11.514 [2024-11-20 07:28:33.583471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.514 [2024-11-20 07:28:33.583525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.514 [2024-11-20 07:28:33.583538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.514 [2024-11-20 07:28:33.583546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.514 [2024-11-20 07:28:33.583552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.514 [2024-11-20 07:28:33.583567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.514 qpair failed and we were unable to recover it. 00:30:11.515 [2024-11-20 07:28:33.593469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.515 [2024-11-20 07:28:33.593517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.515 [2024-11-20 07:28:33.593531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.515 [2024-11-20 07:28:33.593538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.515 [2024-11-20 07:28:33.593545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.515 [2024-11-20 07:28:33.593559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.515 qpair failed and we were unable to recover it. 00:30:11.515 [2024-11-20 07:28:33.603452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.515 [2024-11-20 07:28:33.603505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.515 [2024-11-20 07:28:33.603517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.515 [2024-11-20 07:28:33.603525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.515 [2024-11-20 07:28:33.603532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.515 [2024-11-20 07:28:33.603546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.515 qpair failed and we were unable to recover it. 
00:30:11.515 [2024-11-20 07:28:33.613488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.515 [2024-11-20 07:28:33.613536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.515 [2024-11-20 07:28:33.613549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.515 [2024-11-20 07:28:33.613556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.515 [2024-11-20 07:28:33.613563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.515 [2024-11-20 07:28:33.613577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.515 qpair failed and we were unable to recover it. 00:30:11.515 [2024-11-20 07:28:33.623534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.515 [2024-11-20 07:28:33.623588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.515 [2024-11-20 07:28:33.623601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.515 [2024-11-20 07:28:33.623608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.515 [2024-11-20 07:28:33.623615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.515 [2024-11-20 07:28:33.623629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.515 qpair failed and we were unable to recover it. 00:30:11.515 [2024-11-20 07:28:33.633591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.515 [2024-11-20 07:28:33.633687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.515 [2024-11-20 07:28:33.633704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.515 [2024-11-20 07:28:33.633712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.515 [2024-11-20 07:28:33.633718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.515 [2024-11-20 07:28:33.633733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.515 qpair failed and we were unable to recover it. 
00:30:11.515 [2024-11-20 07:28:33.643595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.515 [2024-11-20 07:28:33.643646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.515 [2024-11-20 07:28:33.643659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.515 [2024-11-20 07:28:33.643667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.515 [2024-11-20 07:28:33.643673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.515 [2024-11-20 07:28:33.643688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.515 qpair failed and we were unable to recover it. 00:30:11.515 [2024-11-20 07:28:33.653504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.515 [2024-11-20 07:28:33.653555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.515 [2024-11-20 07:28:33.653569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.515 [2024-11-20 07:28:33.653576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.515 [2024-11-20 07:28:33.653583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.515 [2024-11-20 07:28:33.653598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.515 qpair failed and we were unable to recover it. 00:30:11.515 [2024-11-20 07:28:33.663680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.515 [2024-11-20 07:28:33.663734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.515 [2024-11-20 07:28:33.663747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.515 [2024-11-20 07:28:33.663754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.515 [2024-11-20 07:28:33.663761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.515 [2024-11-20 07:28:33.663775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.515 qpair failed and we were unable to recover it. 
00:30:11.515 [2024-11-20 07:28:33.673659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.515 [2024-11-20 07:28:33.673707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.515 [2024-11-20 07:28:33.673721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.515 [2024-11-20 07:28:33.673731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.515 [2024-11-20 07:28:33.673738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.515 [2024-11-20 07:28:33.673753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.515 qpair failed and we were unable to recover it. 00:30:11.515 [2024-11-20 07:28:33.683666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.515 [2024-11-20 07:28:33.683714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.515 [2024-11-20 07:28:33.683727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.515 [2024-11-20 07:28:33.683735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.515 [2024-11-20 07:28:33.683741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.515 [2024-11-20 07:28:33.683756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.515 qpair failed and we were unable to recover it. 00:30:11.515 [2024-11-20 07:28:33.693719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.515 [2024-11-20 07:28:33.693770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.515 [2024-11-20 07:28:33.693783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.515 [2024-11-20 07:28:33.693791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.515 [2024-11-20 07:28:33.693797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.515 [2024-11-20 07:28:33.693812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.515 qpair failed and we were unable to recover it. 
00:30:11.515 [2024-11-20 07:28:33.703785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.515 [2024-11-20 07:28:33.703843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.515 [2024-11-20 07:28:33.703856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.515 [2024-11-20 07:28:33.703863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.515 [2024-11-20 07:28:33.703870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.515 [2024-11-20 07:28:33.703885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.515 qpair failed and we were unable to recover it. 00:30:11.515 [2024-11-20 07:28:33.713784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.515 [2024-11-20 07:28:33.713831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.515 [2024-11-20 07:28:33.713844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.515 [2024-11-20 07:28:33.713852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.515 [2024-11-20 07:28:33.713858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.515 [2024-11-20 07:28:33.713876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.515 qpair failed and we were unable to recover it. 00:30:11.516 [2024-11-20 07:28:33.723663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.516 [2024-11-20 07:28:33.723717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.516 [2024-11-20 07:28:33.723730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.516 [2024-11-20 07:28:33.723738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.516 [2024-11-20 07:28:33.723744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.516 [2024-11-20 07:28:33.723759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.516 qpair failed and we were unable to recover it. 
00:30:11.516 [2024-11-20 07:28:33.733816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.516 [2024-11-20 07:28:33.733884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.516 [2024-11-20 07:28:33.733898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.516 [2024-11-20 07:28:33.733906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.516 [2024-11-20 07:28:33.733913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.516 [2024-11-20 07:28:33.733932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.516 qpair failed and we were unable to recover it. 00:30:11.516 [2024-11-20 07:28:33.743853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.516 [2024-11-20 07:28:33.743956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.516 [2024-11-20 07:28:33.743970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.516 [2024-11-20 07:28:33.743978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.516 [2024-11-20 07:28:33.743985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.516 [2024-11-20 07:28:33.744000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.516 qpair failed and we were unable to recover it. 00:30:11.516 [2024-11-20 07:28:33.753876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.516 [2024-11-20 07:28:33.753933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.516 [2024-11-20 07:28:33.753946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.516 [2024-11-20 07:28:33.753954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.516 [2024-11-20 07:28:33.753961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.516 [2024-11-20 07:28:33.753975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.516 qpair failed and we were unable to recover it. 
00:30:11.516 [2024-11-20 07:28:33.763904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.516 [2024-11-20 07:28:33.763954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.516 [2024-11-20 07:28:33.763968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.516 [2024-11-20 07:28:33.763975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.516 [2024-11-20 07:28:33.763982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.516 [2024-11-20 07:28:33.763996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.516 qpair failed and we were unable to recover it. 00:30:11.516 [2024-11-20 07:28:33.773953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.516 [2024-11-20 07:28:33.774004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.516 [2024-11-20 07:28:33.774017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.516 [2024-11-20 07:28:33.774025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.516 [2024-11-20 07:28:33.774031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.516 [2024-11-20 07:28:33.774046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.516 qpair failed and we were unable to recover it. 00:30:11.516 [2024-11-20 07:28:33.784005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.516 [2024-11-20 07:28:33.784061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.516 [2024-11-20 07:28:33.784074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.516 [2024-11-20 07:28:33.784081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.516 [2024-11-20 07:28:33.784088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.516 [2024-11-20 07:28:33.784102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.516 qpair failed and we were unable to recover it. 
00:30:11.777 [2024-11-20 07:28:33.793987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.777 [2024-11-20 07:28:33.794037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.777 [2024-11-20 07:28:33.794051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.777 [2024-11-20 07:28:33.794058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.777 [2024-11-20 07:28:33.794065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.777 [2024-11-20 07:28:33.794079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.777 qpair failed and we were unable to recover it. 00:30:11.778 [2024-11-20 07:28:33.804004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.778 [2024-11-20 07:28:33.804053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.778 [2024-11-20 07:28:33.804067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.778 [2024-11-20 07:28:33.804077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.778 [2024-11-20 07:28:33.804084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.778 [2024-11-20 07:28:33.804099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.778 qpair failed and we were unable to recover it. 00:30:11.778 [2024-11-20 07:28:33.814007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.778 [2024-11-20 07:28:33.814053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.778 [2024-11-20 07:28:33.814066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.778 [2024-11-20 07:28:33.814074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.778 [2024-11-20 07:28:33.814080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.778 [2024-11-20 07:28:33.814095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.778 qpair failed and we were unable to recover it. 
00:30:11.778 [2024-11-20 07:28:33.824125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.778 [2024-11-20 07:28:33.824183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.778 [2024-11-20 07:28:33.824196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.778 [2024-11-20 07:28:33.824203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.778 [2024-11-20 07:28:33.824210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.778 [2024-11-20 07:28:33.824225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.778 qpair failed and we were unable to recover it. 00:30:11.778 [2024-11-20 07:28:33.834099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.778 [2024-11-20 07:28:33.834202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.778 [2024-11-20 07:28:33.834216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.778 [2024-11-20 07:28:33.834223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.778 [2024-11-20 07:28:33.834230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.778 [2024-11-20 07:28:33.834244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.778 qpair failed and we were unable to recover it. 00:30:11.778 [2024-11-20 07:28:33.844132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.778 [2024-11-20 07:28:33.844182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.778 [2024-11-20 07:28:33.844195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.778 [2024-11-20 07:28:33.844202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.778 [2024-11-20 07:28:33.844209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.778 [2024-11-20 07:28:33.844227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.778 qpair failed and we were unable to recover it. 
00:30:11.778 [2024-11-20 07:28:33.854150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.778 [2024-11-20 07:28:33.854203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.778 [2024-11-20 07:28:33.854216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.778 [2024-11-20 07:28:33.854223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.778 [2024-11-20 07:28:33.854230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.778 [2024-11-20 07:28:33.854244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.778 qpair failed and we were unable to recover it. 00:30:11.778 [2024-11-20 07:28:33.864221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.778 [2024-11-20 07:28:33.864274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.778 [2024-11-20 07:28:33.864287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.778 [2024-11-20 07:28:33.864294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.778 [2024-11-20 07:28:33.864300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.778 [2024-11-20 07:28:33.864315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.778 qpair failed and we were unable to recover it. 00:30:11.778 [2024-11-20 07:28:33.874196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.778 [2024-11-20 07:28:33.874252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.778 [2024-11-20 07:28:33.874265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.778 [2024-11-20 07:28:33.874272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.778 [2024-11-20 07:28:33.874278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.778 [2024-11-20 07:28:33.874293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.778 qpair failed and we were unable to recover it. 
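The bracketed wall-clock stamps show the host retrying on a steady cadence of roughly 10 ms (33.793987, 33.804004, 33.814007, and so on). Assuming the console output is saved one record per line in a file (build.log here is a hypothetical name), the inter-attempt gap can be computed directly:

  # Pull the wall-clock time of each target-side rejection and print the
  # delta between consecutive attempts, in microseconds.
  sed -n 's/.*\[2024-11-20 \([0-9:.]*\)\] ctrlr\.c: 762.*/\1/p' build.log \
    | awk -F'[:.]' '{ t = (($1*60 + $2)*60 + $3)*1000000 + $4
                      if (NR > 1) print t - prev; prev = t }'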
00:30:11.778 [2024-11-20 07:28:33.884209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.778 [2024-11-20 07:28:33.884284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.778 [2024-11-20 07:28:33.884297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.778 [2024-11-20 07:28:33.884304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.778 [2024-11-20 07:28:33.884311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.778 [2024-11-20 07:28:33.884325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.778 qpair failed and we were unable to recover it. 00:30:11.778 [2024-11-20 07:28:33.894251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.778 [2024-11-20 07:28:33.894301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.778 [2024-11-20 07:28:33.894315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.778 [2024-11-20 07:28:33.894322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.778 [2024-11-20 07:28:33.894328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.778 [2024-11-20 07:28:33.894343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.778 qpair failed and we were unable to recover it. 00:30:11.778 [2024-11-20 07:28:33.904317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.778 [2024-11-20 07:28:33.904368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.778 [2024-11-20 07:28:33.904382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.778 [2024-11-20 07:28:33.904389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.778 [2024-11-20 07:28:33.904396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.778 [2024-11-20 07:28:33.904411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.778 qpair failed and we were unable to recover it. 
00:30:11.778 [2024-11-20 07:28:33.914317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.778 [2024-11-20 07:28:33.914370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.778 [2024-11-20 07:28:33.914383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.778 [2024-11-20 07:28:33.914391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.778 [2024-11-20 07:28:33.914397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.778 [2024-11-20 07:28:33.914412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.778 qpair failed and we were unable to recover it. 00:30:11.778 [2024-11-20 07:28:33.924211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.778 [2024-11-20 07:28:33.924265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.778 [2024-11-20 07:28:33.924279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.778 [2024-11-20 07:28:33.924287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.778 [2024-11-20 07:28:33.924293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.778 [2024-11-20 07:28:33.924314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.779 qpair failed and we were unable to recover it. 00:30:11.779 [2024-11-20 07:28:33.934351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.779 [2024-11-20 07:28:33.934401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.779 [2024-11-20 07:28:33.934418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.779 [2024-11-20 07:28:33.934425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.779 [2024-11-20 07:28:33.934432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.779 [2024-11-20 07:28:33.934447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.779 qpair failed and we were unable to recover it. 
00:30:11.779 [2024-11-20 07:28:33.944419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.779 [2024-11-20 07:28:33.944474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.779 [2024-11-20 07:28:33.944487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.779 [2024-11-20 07:28:33.944495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.779 [2024-11-20 07:28:33.944501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.779 [2024-11-20 07:28:33.944515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.779 qpair failed and we were unable to recover it. 00:30:11.779 [2024-11-20 07:28:33.954422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.779 [2024-11-20 07:28:33.954505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.779 [2024-11-20 07:28:33.954518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.779 [2024-11-20 07:28:33.954525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.779 [2024-11-20 07:28:33.954533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.779 [2024-11-20 07:28:33.954547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.779 qpair failed and we were unable to recover it. 00:30:11.779 [2024-11-20 07:28:33.964444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.779 [2024-11-20 07:28:33.964529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.779 [2024-11-20 07:28:33.964543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.779 [2024-11-20 07:28:33.964551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.779 [2024-11-20 07:28:33.964558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.779 [2024-11-20 07:28:33.964572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.779 qpair failed and we were unable to recover it. 
00:30:11.779 [2024-11-20 07:28:33.974468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.779 [2024-11-20 07:28:33.974530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.779 [2024-11-20 07:28:33.974543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.779 [2024-11-20 07:28:33.974550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.779 [2024-11-20 07:28:33.974560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.779 [2024-11-20 07:28:33.974575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.779 qpair failed and we were unable to recover it. 00:30:11.779 [2024-11-20 07:28:33.984559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.779 [2024-11-20 07:28:33.984618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.779 [2024-11-20 07:28:33.984631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.779 [2024-11-20 07:28:33.984638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.779 [2024-11-20 07:28:33.984645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6570000b90 00:30:11.779 [2024-11-20 07:28:33.984659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.779 qpair failed and we were unable to recover it. 00:30:11.779 [2024-11-20 07:28:33.994539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.779 [2024-11-20 07:28:33.994639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.779 [2024-11-20 07:28:33.994703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.779 [2024-11-20 07:28:33.994729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.779 [2024-11-20 07:28:33.994750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6564000b90 00:30:11.779 [2024-11-20 07:28:33.994805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.779 qpair failed and we were unable to recover it. 
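One detail worth noticing before the storm ends: the last block above fails against tqpair=0x7f6564000b90 on qpair id 4, while every earlier block in this stretch involved tqpair=0x7f6570000b90 and qpair id 1, so the injected fault is tearing down more than one I/O qpair. A quick way to see the spread across qpairs in a saved capture (file name again hypothetical):

  # Count connect/recovery failures per TCP qpair pointer and per qpair id.
  grep -o 'Failed to connect tqpair=0x[0-9a-f]*' build.log | sort | uniq -c
  grep -o 'on qpair id [0-9]*' build.log | sort | uniq -c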
00:30:11.779 [2024-11-20 07:28:34.004538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.779 [2024-11-20 07:28:34.004611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.779 [2024-11-20 07:28:34.004642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.779 [2024-11-20 07:28:34.004658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.779 [2024-11-20 07:28:34.004673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6564000b90 00:30:11.779 [2024-11-20 07:28:34.004704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.779 qpair failed and we were unable to recover it. 00:30:11.779 [2024-11-20 07:28:34.014574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.779 [2024-11-20 07:28:34.014636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.779 [2024-11-20 07:28:34.014655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.779 [2024-11-20 07:28:34.014666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.779 [2024-11-20 07:28:34.014675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6564000b90 00:30:11.779 [2024-11-20 07:28:34.014697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.779 qpair failed and we were unable to recover it. 00:30:11.779 [2024-11-20 07:28:34.014883] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:30:11.779 A controller has encountered a failure and is being reset. 00:30:11.779 [2024-11-20 07:28:34.015000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c1e00 (9): Bad file descriptor 00:30:12.039 Controller properly reset. 00:30:12.039 Initializing NVMe Controllers 00:30:12.039 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:12.039 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:12.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:12.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:12.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:12.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:12.039 Initialization complete. Launching workers. 
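The failure loop ends when a Keep Alive submission fails: the host flushes the admin connection's socket (fd 9 on tqpair 0x20c1e00, hence the "Bad file descriptor"), resets the controller, reattaches to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, and associates the attached controller with worker threads on lcores 0 through 3 before relaunching the workers, which is what lets the harness report the controller properly reset. As a hedged, illustrative cross-check from the shell that the target's listener accepts fabrics commands again after such a reset (the test itself verifies recovery through the SPDK initiator, not nvme-cli, and whether discovery is served at this address depends on the target configuration):

  # Probe the discovery service; success implies the TCP listener at
  # 10.0.0.2:4420 is accepting new NVMe-oF connections after the reset.
  nvme discover -t tcp -a 10.0.0.2 -s 4420 && echo "target reachable again"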
00:30:12.039 Starting thread on core 1 00:30:12.039 Starting thread on core 2 00:30:12.039 Starting thread on core 3 00:30:12.039 Starting thread on core 0 00:30:12.039 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:12.039 00:30:12.039 real 0m11.613s 00:30:12.039 user 0m21.771s 00:30:12.039 sys 0m3.970s 00:30:12.039 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:12.039 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:12.039 ************************************ 00:30:12.039 END TEST nvmf_target_disconnect_tc2 00:30:12.039 ************************************ 00:30:12.039 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:12.039 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:12.039 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:12.039 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:12.040 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:30:12.040 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:12.040 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:30:12.040 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:12.040 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:12.040 rmmod nvme_tcp 00:30:12.040 rmmod nvme_fabrics 00:30:12.040 rmmod nvme_keyring 00:30:12.040 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:12.040 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:30:12.040 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:30:12.040 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3710358 ']' 00:30:12.040 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3710358 00:30:12.040 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3710358 ']' 00:30:12.040 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 3710358 00:30:12.040 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:30:12.040 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:12.040 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3710358 00:30:12.300 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:30:12.300 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:30:12.300 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3710358' 00:30:12.300 killing process with pid 3710358 00:30:12.300 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@971 -- # kill 3710358 00:30:12.300 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 3710358 00:30:12.300 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:12.300 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:12.300 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:12.300 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:30:12.300 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:30:12.300 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:12.300 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:30:12.300 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:12.300 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:12.300 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.300 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:12.300 07:28:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.845 07:28:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:14.845 00:30:14.845 real 0m22.062s 00:30:14.845 user 0m50.228s 00:30:14.845 sys 0m10.240s 00:30:14.845 07:28:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:14.845 07:28:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:14.845 ************************************ 00:30:14.845 END TEST nvmf_target_disconnect 00:30:14.845 ************************************ 00:30:14.845 07:28:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:14.845 00:30:14.845 real 6m33.895s 00:30:14.845 user 11m26.298s 00:30:14.845 sys 2m16.364s 00:30:14.845 07:28:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:14.845 07:28:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.845 ************************************ 00:30:14.845 END TEST nvmf_host 00:30:14.845 ************************************ 00:30:14.845 07:28:36 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:30:14.845 07:28:36 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:30:14.845 07:28:36 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:14.845 07:28:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:14.845 07:28:36 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:14.845 07:28:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:14.845 ************************************ 00:30:14.845 START TEST nvmf_target_core_interrupt_mode 00:30:14.845 ************************************ 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:14.845 * Looking for test storage... 00:30:14.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:14.845 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:14.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.846 --rc genhtml_branch_coverage=1 00:30:14.846 --rc genhtml_function_coverage=1 00:30:14.846 --rc genhtml_legend=1 00:30:14.846 --rc geninfo_all_blocks=1 00:30:14.846 --rc geninfo_unexecuted_blocks=1 00:30:14.846 00:30:14.846 ' 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:14.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.846 --rc genhtml_branch_coverage=1 00:30:14.846 --rc genhtml_function_coverage=1 00:30:14.846 --rc genhtml_legend=1 00:30:14.846 --rc geninfo_all_blocks=1 00:30:14.846 --rc geninfo_unexecuted_blocks=1 00:30:14.846 00:30:14.846 ' 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:14.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.846 --rc genhtml_branch_coverage=1 00:30:14.846 --rc genhtml_function_coverage=1 00:30:14.846 --rc genhtml_legend=1 00:30:14.846 --rc geninfo_all_blocks=1 00:30:14.846 --rc geninfo_unexecuted_blocks=1 00:30:14.846 00:30:14.846 ' 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:14.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.846 --rc genhtml_branch_coverage=1 00:30:14.846 --rc genhtml_function_coverage=1 00:30:14.846 --rc genhtml_legend=1 00:30:14.846 --rc geninfo_all_blocks=1 00:30:14.846 --rc geninfo_unexecuted_blocks=1 00:30:14.846 00:30:14.846 ' 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:14.846 ************************************ 00:30:14.846 START TEST nvmf_abort 00:30:14.846 ************************************ 00:30:14.846 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:14.846 * Looking for test storage... 00:30:14.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:14.846 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:14.846 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:30:14.846 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:15.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.109 --rc genhtml_branch_coverage=1 00:30:15.109 --rc genhtml_function_coverage=1 00:30:15.109 --rc genhtml_legend=1 00:30:15.109 --rc geninfo_all_blocks=1 00:30:15.109 --rc geninfo_unexecuted_blocks=1 00:30:15.109 00:30:15.109 ' 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:15.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.109 --rc genhtml_branch_coverage=1 00:30:15.109 --rc genhtml_function_coverage=1 00:30:15.109 --rc genhtml_legend=1 00:30:15.109 --rc geninfo_all_blocks=1 00:30:15.109 --rc geninfo_unexecuted_blocks=1 00:30:15.109 00:30:15.109 ' 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:15.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.109 --rc genhtml_branch_coverage=1 00:30:15.109 --rc genhtml_function_coverage=1 00:30:15.109 --rc genhtml_legend=1 00:30:15.109 --rc geninfo_all_blocks=1 00:30:15.109 --rc geninfo_unexecuted_blocks=1 00:30:15.109 00:30:15.109 ' 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:15.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.109 --rc genhtml_branch_coverage=1 00:30:15.109 --rc genhtml_function_coverage=1 00:30:15.109 --rc genhtml_legend=1 00:30:15.109 --rc geninfo_all_blocks=1 00:30:15.109 --rc geninfo_unexecuted_blocks=1 00:30:15.109 00:30:15.109 ' 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.109 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:15.110 07:28:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:30:15.110 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:23.253 07:28:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:23.253 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
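Here nvmf/common.sh starts walking the detected NICs: both functions of the E810 adapter (device ID 0x159b at 0000:4b:00.0 and 0000:4b:00.1) were collected into e810[] and taken as pci_devs, and for each one the script checks the bound driver (ice) and then, as the trace continues below, globs /sys/bus/pci/devices/$pci/net/* to resolve the kernel net device (cvl_0_0 for the first port; cvl_0_1, flushed earlier during teardown, belongs to the second). The same sysfs lookup can be reproduced standalone; a minimal sketch assuming the PCI addresses from this run:

  # Map each E810 function to its net device, mirroring common.sh's
  # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion.
  for pci in 0000:4b:00.0 0000:4b:00.1; do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$dev" ] && echo "$pci -> ${dev##*/}"
      done
  done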
00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:23.253 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:30:23.253 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:30:23.254 Found net devices under 0000:4b:00.0: cvl_0_0
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:30:23.254 Found net devices under 0000:4b:00.1: cvl_0_1
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:30:23.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:23.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms
00:30:23.254
00:30:23.254 --- 10.0.0.2 ping statistics ---
00:30:23.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:23.254 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:23.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:23.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms
00:30:23.254
00:30:23.254 --- 10.0.0.1 ping statistics ---
00:30:23.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:23.254 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3715992
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3715992
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 3715992 ']'
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:23.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable
00:30:23.254 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:30:23.255 [2024-11-20 07:28:44.834436] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:30:23.255 [2024-11-20 07:28:44.835941] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:30:23.255 [2024-11-20 07:28:44.836013] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:23.255 [2024-11-20 07:28:44.938250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:30:23.255 [2024-11-20 07:28:44.989857] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:23.255 [2024-11-20 07:28:44.989907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:23.255 [2024-11-20 07:28:44.989916] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:23.255 [2024-11-20 07:28:44.989923] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:23.255 [2024-11-20 07:28:44.989930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:23.255 [2024-11-20 07:28:44.991790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:30:23.255 [2024-11-20 07:28:44.991954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:23.255 [2024-11-20 07:28:44.991956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:30:23.255 [2024-11-20 07:28:45.068502] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:30:23.255 [2024-11-20 07:28:45.069726] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:30:23.255 [2024-11-20 07:28:45.070070] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:30:23.255 [2024-11-20 07:28:45.070226] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
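nvmf_tcp_init above carves the two E810 ports into a point-to-point test network: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), so NVMe/TCP traffic crosses the physical link rather than loopback. Condensed from the trace, same names and addresses:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open port 4420, tagged so teardown can strip exactly this rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

The two sub-millisecond pings verify both directions before nvmf_tgt is started inside the namespace, and the EAL output that follows confirms the app came up on three cores in interrupt mode.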
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:30:23.515 [2024-11-20 07:28:45.692832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:30:23.515 Malloc0
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:30:23.515 Delay0
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:23.515 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:30:23.776 [2024-11-20 07:28:45.792811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:23.776 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:23.776 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:23.776 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:23.776 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:30:23.776 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:23.776 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:30:23.776 [2024-11-20 07:28:45.933974] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:30:26.320 Initializing NVMe Controllers
00:30:26.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:30:26.320 controller IO queue size 128 less than required
00:30:26.320 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:30:26.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:30:26.320 Initialization complete. Launching workers.
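rpc_cmd in abort.sh is effectively scripts/rpc.py against /var/tmp/spdk.sock (the hotplug test below sets rpc_py to that script explicitly). The stack it just built, written out as plain calls: a TCP transport, a 64 MiB malloc bdev with a 4 KiB block size, a delay bdev layered on top that adds 1000000 us (one second) to every read and write so I/O reliably sits queued long enough to be aborted, and a subsystem exposing it on 10.0.0.2:4420:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The abort example then connects at queue depth 128 (-q 128) for a one-second run (-t 1); the controller grants a smaller queue than requested, which is what the 'controller IO queue size 128 less than required' notice above reports, so requests pile up at the driver and stay outstanding, exactly the state the abort test wants to exercise.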
00:30:26.320 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28657
00:30:26.320 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28718, failed to submit 66
00:30:26.320 success 28657, unsuccessful 61, failed 0
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:26.320 rmmod nvme_tcp
00:30:26.320 rmmod nvme_fabrics
00:30:26.320 rmmod nvme_keyring
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3715992 ']'
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3715992
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 3715992 ']'
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 3715992
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3715992
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3715992'
00:30:26.320 killing process with pid 3715992
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 3715992
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 3715992
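The abort counters above reconcile exactly: the workload saw 127 + 28657 = 28784 I/Os (completed plus 'failed', where failed here means aborted in flight); one abort was attempted per outstanding I/O, 28718 were submitted to the controller and 66 could not be submitted (28718 + 66 = 28784); of those submitted, 28657 succeeded and 61 completed without catching their I/O (28657 + 61 = 28718). 'failed 0' means no abort command itself errored, which is the pass condition the test checks before tearing everything down.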
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:26.320 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:28.233 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:28.233
00:30:28.233 real 0m13.438s
00:30:28.233 user 0m10.959s
00:30:28.233 sys 0m7.025s
00:30:28.233 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable
00:30:28.233 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:30:28.233 ************************************
00:30:28.233 END TEST nvmf_abort
00:30:28.233 ************************************
00:30:28.233 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode
00:30:28.233 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:30:28.233 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:30:28.233 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:30:28.494 ************************************
00:30:28.494 START TEST nvmf_ns_hotplug_stress ************************************
00:30:28.494 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode
00:30:28.494 * Looking for test storage...
00:30:28.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
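The abort teardown above runs setup in reverse: unload the kernel NVMe modules (the rmmod lines), kill target pid 3715992 (whose process name reactor_1 confirms it was an SPDK reactor, not a stray sudo), restore every iptables rule except the SPDK_NVMF-tagged one, and drop the namespace. Condensed; the namespace deletion is an assumption about _remove_spdk_ns, whose body the trace hides behind xtrace_disable_per_cmd:

    modprobe -v -r nvme-tcp           # also removes nvme_fabrics and nvme_keyring
    kill 3715992 && wait 3715992
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1

nvmf_abort closes at 13.4 s wall time, and ns_hotplug_stress immediately re-runs the identical nvmftestinit sequence, which is why the device-discovery trace appears a second time further down.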
00:30:28.494 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-:
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-:
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<'
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:30:28.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:28.495 --rc genhtml_branch_coverage=1
00:30:28.495 --rc genhtml_function_coverage=1
00:30:28.495 --rc genhtml_legend=1
00:30:28.495 --rc geninfo_all_blocks=1
00:30:28.495 --rc geninfo_unexecuted_blocks=1
00:30:28.495
00:30:28.495 '
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:30:28.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:28.495 --rc genhtml_branch_coverage=1
00:30:28.495 --rc genhtml_function_coverage=1
00:30:28.495 --rc genhtml_legend=1
00:30:28.495 --rc geninfo_all_blocks=1
00:30:28.495 --rc geninfo_unexecuted_blocks=1
00:30:28.495
00:30:28.495 '
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:30:28.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:28.495 --rc genhtml_branch_coverage=1
00:30:28.495 --rc genhtml_function_coverage=1
00:30:28.495 --rc genhtml_legend=1
00:30:28.495 --rc geninfo_all_blocks=1
00:30:28.495 --rc geninfo_unexecuted_blocks=1
00:30:28.495
00:30:28.495 '
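The scripts/common.sh trace above is a field-wise version comparison: 'lt 1.15 2' splits both version strings on the characters in IFS=.-:, walks the longer of the two arrays, and decides at the first differing field, so lcov 1.15 sorts before 2 and the pre-2.0 LCOV_OPTS block gets exported. A standalone sketch of the same walk, reduced to the '<' case exercised here (the real cmp_versions also handles the other operators):

    lt() {   # usage: lt 1.15 2  -> returns 0 when $1 sorts before $2
        local IFS=.-: v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && return 0
            (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }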
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:30:28.495 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:28.496 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:28.496 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:28.496 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:30:28.496 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:30:28.496 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:30:28.496 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:30:28.496 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0
00:30:28.496 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:30:28.496 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit
00:30:28.496 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:30:28.496 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:30:28.496 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs
00:30:28.496 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no
00:30:28.496 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns
00:30:28.496 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:28.496 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:28.496 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:28.496 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:30:28.496 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:30:28.496 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable
00:30:28.496 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=()
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=()
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=()
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=()
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=()
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=()
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=()
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:30:36.635 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:30:36.635 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:30:36.636 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:30:36.636 Found net devices under 0000:4b:00.0: cvl_0_0
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:30:36.636 Found net devices under 0000:4b:00.1: cvl_0_1
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:30:36.636 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:30:36.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:36.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms
00:30:36.636
00:30:36.636 --- 10.0.0.2 ping statistics ---
00:30:36.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:36.636 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms
00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:36.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:36.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:30:36.636 00:30:36.636 --- 10.0.0.1 ping statistics --- 00:30:36.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.636 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3720797 00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3720797 00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 3720797 ']' 00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:36.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
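The trace above is nvmf_tcp_init from nvmf/common.sh: the first E810 port (cvl_0_0) is moved into a private network namespace to play the target side, the second port (cvl_0_1) stays in the root namespace as the initiator side, an iptables rule opens the NVMe/TCP listener port, and one ping in each direction proves connectivity before the target app is launched inside the namespace. A minimal sketch of that sequence, using the interface names and 10.0.0.0/24 addressing recorded in this run:

  # Flush stale addresses, then split the two ports across namespaces.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP (test ns)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                 # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator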
00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:36.636 07:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:36.637 [2024-11-20 07:28:58.325135] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:36.637 [2024-11-20 07:28:58.326255] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:30:36.637 [2024-11-20 07:28:58.326305] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:36.637 [2024-11-20 07:28:58.426171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:36.637 [2024-11-20 07:28:58.477304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:36.637 [2024-11-20 07:28:58.477354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:36.637 [2024-11-20 07:28:58.477362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:36.637 [2024-11-20 07:28:58.477370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:36.637 [2024-11-20 07:28:58.477376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:36.637 [2024-11-20 07:28:58.479185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:36.637 [2024-11-20 07:28:58.479332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:36.637 [2024-11-20 07:28:58.479470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.637 [2024-11-20 07:28:58.555614] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:36.637 [2024-11-20 07:28:58.556716] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:36.637 [2024-11-20 07:28:58.557146] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:36.637 [2024-11-20 07:28:58.557303] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:30:36.897 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:36.897 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:30:36.897 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:36.897 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:36.897 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:37.158 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:37.158 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:30:37.158 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:37.158 [2024-11-20 07:28:59.352556] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:37.158 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:37.419 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:37.679 [2024-11-20 07:28:59.721513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:37.679 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:37.679 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:37.940 Malloc0 00:30:37.940 07:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:38.200 Delay0 00:30:38.200 07:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:38.461 07:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:38.461 NULL1 00:30:38.461 07:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
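Everything between nvmfappstart and the first resize above is plain rpc.py configuration against /var/tmp/spdk.sock (a UNIX socket, so the root-namespace rpc.py reaches the target running inside the netns): a TCP transport with the options recorded in the trace, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and the three bdevs the stress test needs — Malloc0, a Delay0 wrapper over it, and a null bdev NULL1. Condensed from the trace, with RPC introduced here only as a readability shorthand for the full rpc.py path:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 512 -b Malloc0                          # 32 MB, 512 B blocks
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $RPC bdev_null_create NULL1 1000 512                               # 1000 MB, 512 B blocks
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1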
00:30:38.722 07:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3721229 00:30:38.722 07:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:38.722 07:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:38.722 07:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.984 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.245 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:39.245 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:39.245 true 00:30:39.245 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:39.245 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.506 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.766 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:39.766 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:40.027 true 00:30:40.027 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:40.027 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.288 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.288 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:40.288 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:40.549 true 00:30:40.549 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
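From here the log is the stress loop proper, one iteration per remove/add/resize triple while spdk_nvme_perf (PERF_PID) drives randread I/O at the subsystem from lcore 0: kill -0 $PERF_PID checks the workload is still alive, namespace 1 is hot-removed, Delay0 is hot-added back, and NULL1 is grown by 1 MB. The iterations that follow (null_size 1001 through 1054) differ only in timestamps and the resize target. A sketch of the loop shape consistent with the trace — the exact termination logic lives in ns_hotplug_stress.sh outside this excerpt, so treating "perf still running" as the loop condition is an assumption:

  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do                          # assumed exit condition
      $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove NS 1
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # hot-add it back
      null_size=$((null_size + 1))
      $RPC bdev_null_resize NULL1 "$null_size"                       # grow NULL1 by 1 MB
  done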
-- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:40.549 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.810 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.070 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:41.070 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:41.070 true 00:30:41.070 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:41.071 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.332 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.593 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:41.593 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:41.593 true 00:30:41.853 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:41.853 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.853 07:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:42.113 07:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:42.113 07:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:42.374 true 00:30:42.374 07:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:42.374 07:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:42.634 07:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:30:42.634 07:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:42.634 07:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:42.894 true 00:30:42.894 07:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:42.894 07:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:43.155 07:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:43.155 07:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:43.155 07:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:43.415 true 00:30:43.415 07:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:43.415 07:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:43.677 07:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:43.677 07:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:30:43.677 07:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:43.937 true 00:30:43.937 07:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:43.937 07:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:44.196 07:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:44.457 07:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:44.457 07:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:44.457 true 00:30:44.457 07:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 3721229 00:30:44.457 07:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:44.718 07:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:44.979 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:44.979 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:44.979 true 00:30:45.239 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:45.239 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:45.239 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:45.500 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:45.500 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:45.760 true 00:30:45.760 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:45.760 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:45.760 07:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:46.027 07:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:46.027 07:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:46.355 true 00:30:46.355 07:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:46.355 07:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.355 07:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:46.638 07:29:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:46.638 07:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:46.638 true 00:30:46.899 07:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:46.899 07:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.899 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:47.159 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:47.159 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:47.419 true 00:30:47.419 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:47.419 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:47.680 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:47.680 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:47.680 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:47.942 true 00:30:47.942 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:47.942 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.204 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:48.204 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:48.204 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:48.465 true 00:30:48.465 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:48.465 07:29:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.726 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:48.987 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:48.987 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:48.987 true 00:30:48.987 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:48.987 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.248 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:49.509 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:49.509 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:49.509 true 00:30:49.509 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:49.509 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.770 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:50.031 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:50.031 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:50.031 true 00:30:50.291 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:50.291 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.291 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:50.551 07:29:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:50.551 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:50.811 true 00:30:50.811 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:50.811 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.811 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:51.071 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:51.071 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:51.331 true 00:30:51.331 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:51.331 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.591 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:51.591 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:51.591 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:51.849 true 00:30:51.849 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:51.849 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:52.109 07:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.109 07:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:52.109 07:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:52.368 true 00:30:52.368 07:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:52.368 07:29:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:52.627 07:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.887 07:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:52.887 07:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:52.887 true 00:30:52.887 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:52.887 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.147 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.408 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:53.408 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:53.408 true 00:30:53.408 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:53.408 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.669 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.931 07:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:53.931 07:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:54.192 true 00:30:54.192 07:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:54.192 07:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:54.192 07:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:54.453 07:29:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:54.453 07:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:54.715 true 00:30:54.715 07:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:54.715 07:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:54.976 07:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:54.976 07:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:54.976 07:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:55.237 true 00:30:55.237 07:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:55.237 07:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.497 07:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.497 07:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:55.497 07:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:55.758 true 00:30:55.758 07:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:55.758 07:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.020 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:56.020 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:30:56.020 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:30:56.280 true 00:30:56.280 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:56.280 07:29:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.541 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:56.541 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:30:56.541 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:30:56.802 true 00:30:56.802 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:56.802 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:57.071 07:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.334 07:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:30:57.334 07:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:30:57.334 true 00:30:57.334 07:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:57.335 07:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:57.596 07:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.856 07:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:30:57.856 07:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:30:57.856 true 00:30:57.856 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:57.856 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.117 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.377 07:29:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:30:58.377 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:30:58.377 true 00:30:58.377 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:58.377 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.639 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.900 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:30:58.900 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:30:58.900 true 00:30:59.162 07:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:59.162 07:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.162 07:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.424 07:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:30:59.424 07:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:30:59.424 true 00:30:59.685 07:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:30:59.685 07:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.685 07:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.945 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:30:59.945 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:31:00.206 true 00:31:00.206 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:31:00.206 07:29:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.206 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.466 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:31:00.466 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:31:00.727 true 00:31:00.727 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:31:00.727 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.988 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.988 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:31:00.988 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:31:01.249 true 00:31:01.249 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:31:01.249 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.511 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:01.511 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:31:01.511 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:31:01.771 true 00:31:01.771 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:31:01.771 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.030 07:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:02.292 07:29:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:31:02.292 07:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:31:02.292 true 00:31:02.292 07:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:31:02.292 07:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.553 07:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:02.814 07:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:31:02.815 07:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:31:02.815 true 00:31:02.815 07:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:31:02.815 07:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.075 07:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.337 07:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:31:03.337 07:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:31:03.337 true 00:31:03.599 07:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:31:03.599 07:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.599 07:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.860 07:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:31:03.860 07:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:31:04.122 true 00:31:04.122 07:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:31:04.122 07:29:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.122 07:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.383 07:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:31:04.383 07:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:31:04.644 true 00:31:04.644 07:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:31:04.644 07:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.905 07:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.905 07:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:31:04.905 07:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:31:05.165 true 00:31:05.165 07:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:31:05.165 07:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.426 07:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:05.426 07:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:31:05.426 07:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:31:05.687 true 00:31:05.687 07:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:31:05.687 07:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.948 07:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:05.948 07:29:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:31:05.948 07:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:31:06.209 true 00:31:06.209 07:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:31:06.209 07:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.469 07:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.729 07:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:31:06.729 07:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:31:06.729 true 00:31:06.729 07:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:31:06.729 07:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.989 07:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:07.249 07:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:31:07.249 07:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:31:07.249 true 00:31:07.249 07:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:31:07.249 07:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:07.508 07:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:07.768 07:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:31:07.768 07:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:31:08.030 true 00:31:08.030 07:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229 00:31:08.030 07:29:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:08.030 07:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:08.291 07:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:31:08.291 07:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:31:08.550 true
00:31:08.550 07:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229
00:31:08.550 07:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:08.550 07:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:08.810 07:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:31:08.810 07:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:31:09.070 Initializing NVMe Controllers
00:31:09.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:09.070 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:31:09.070 Controller IO queue size 128, less than required.
00:31:09.070 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:09.070 WARNING: Some requested NVMe devices were skipped
00:31:09.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:09.070 Initialization complete. Launching workers.
00:31:09.070 ========================================================
00:31:09.070 Latency(us)
00:31:09.070 Device Information : IOPS MiB/s Average min max
00:31:09.070 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30366.38 14.83 4215.15 1107.48 11224.85
00:31:09.070 ========================================================
00:31:09.070 Total : 30366.38 14.83 4215.15 1107.48 11224.85
00:31:09.070
00:31:09.070 true
00:31:09.070 07:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3721229
00:31:09.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3721229) - No such process
00:31:09.070 07:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3721229
00:31:09.070 07:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:09.070 07:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:31:09.331 07:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:31:09.331 07:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:31:09.331 07:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:31:09.331 07:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:09.331 07:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:31:09.592 null0
00:31:09.592 07:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:09.592 07:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:09.592 07:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:31:09.592 null1
00:31:09.592 07:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:09.592 07:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:09.592 07:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:31:09.852 null2
00:31:09.852 07:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:09.853 07:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:09.853 07:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress
-- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:10.113 null3 00:31:10.113 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:10.113 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:10.113 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:10.113 null4 00:31:10.113 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:10.113 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:10.113 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:10.374 null5 00:31:10.374 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:10.374 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:10.374 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:10.374 null6 00:31:10.374 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:10.374 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:10.374 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:10.636 null7 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
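
Note: the xtrace entries from ns_hotplug_stress.sh@58 onward (nthreads=8, pids=(), the @59-@60 bdev_null_create loop, the @62-@64 launch loop, and the wait at @66 further down) suggest the parallel phase of the script has roughly the shape sketched below. This is a reconstruction from the trace, not the script source: the for-loop syntax is inferred from the (( i = 0 )) / (( i < nthreads )) / (( ++i )) evaluations, and rpc_py holds the path the trace shows fully expanded.

    # Sketch of the parallel hotplug launcher, reconstructed from the trace above
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    # @59-@60: one null bdev per worker; bdev_null_create <name> <total_size_mb> <block_size>,
    # so null0..null7 are 100 MB bdevs with a 4096-byte block size
    for ((i = 0; i < nthreads; ++i)); do
        $rpc_py bdev_null_create "null$i" 100 4096
    done
    # @62-@64: one backgrounded add_remove worker per namespace id, PID collected via $!
    for ((i = 0; i < nthreads; ++i)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    # @66: block until all workers exit (traced as: wait 3727502 3727504 ... 3727521)
    wait "${pids[@]}"

Backgrounding each add_remove and appending $! to pids is what yields the eight PIDs echoed by the wait line in the trace below.
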
00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
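
Note: the add_remove worker itself is traced at ns_hotplug_stress.sh@14-@18 (e.g. "add_remove 1 null0" followed by "local nsid=1 bdev=null0"). From those entries it appears to reduce to the function below; this is a hedged reconstruction in which only the expanded rpc.py commands are taken verbatim from the trace, and rpc_py is as in the launcher sketch above.

    # Sketch of the per-namespace worker, reconstructed from the trace
    add_remove() {
        local nsid=$1 bdev=$2 # @14: e.g. nsid=1, bdev=null0
        for ((i = 0; i < 10; ++i)); do # @16: ten attach/detach cycles per worker
            # @17: expose $bdev as namespace $nsid of the subsystem
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            # @18: detach it again, racing the seven sibling workers
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
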
00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
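
Note: the first half of this excerpt, where null_size steps from 1042 up to 1054 while an I/O generator runs against the target, is the earlier single-namespace phase of the same script (ns_hotplug_stress.sh@44-@50, with the wait at @53 once kill -0 starts failing). A hedged reconstruction from the trace follows; the perf_pid variable name and the while form are assumptions (3721229 is the PID probed in the trace), and the commands are the ones shown expanded above.

    # Sketch of the resize-under-I/O phase, reconstructed from the trace
    while kill -0 "$perf_pid"; do # @44: loop while the I/O generator is still alive
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 # @45: hot-remove ns 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46: hot-add it back
        null_size=$((null_size + 1)) # @49: 1042, 1043, ..., 1054 in this excerpt
        $rpc_py bdev_null_resize NULL1 "$null_size" # @50: grow NULL1 while I/O is in flight
    done
    wait "$perf_pid" # @53: reap the generator ("No such process" marks the loop's exit)
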
00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.636 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:10.637 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:10.637 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:10.637 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:10.637 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:10.637 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:10.637 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
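
Note: with eight workers interleaved, following a single namespace through this stretch of the log is easier with a filter. A hedged example, assuming GNU grep and that the log has been saved one entry per line as build.log (both assumptions):

    # Timeline of namespace 3 only: add lines carry "-n 3", remove lines end with "...cnode1 3"
    grep -E 'nvmf_subsystem_add_ns -n 3 |nvmf_subsystem_remove_ns nqn\.2016-06\.io\.spdk:cnode1 3\b' build.log
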
00:31:10.637 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:10.637 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:10.637 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:10.637 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:10.637 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:10.637 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:10.637 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.637 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.637 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:10.637 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3727502 3727504 3727507 3727509 3727512 3727515 3727518 3727521 00:31:10.637 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:10.898 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:10.898 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:10.899 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:10.899 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:10.899 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:10.899 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:10.899 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:10.899 07:29:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.160 07:29:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:11.160 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:11.161 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:11.422 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:11.422 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:11.422 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:11.422 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:11.422 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.422 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.422 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:11.422 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.422 07:29:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.422 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:11.422 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.422 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.423 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:11.423 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.423 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.423 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:11.423 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.423 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.423 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:11.423 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.423 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.423 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:11.423 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.423 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.423 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:11.423 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.423 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.423 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:11.684 07:29:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.684 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:11.684 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:11.684 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:11.684 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:11.684 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:11.684 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:11.684 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:11.684 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.684 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.684 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:11.684 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.684 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.684 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:11.684 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.684 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.684 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:11.684 07:29:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.684 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.684 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:11.945 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.945 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.945 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:11.945 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.945 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.945 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:11.945 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.945 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.945 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:11.945 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.945 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.945 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:11.945 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.945 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:11.945 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:11.945 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:31:11.945 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:11.945 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:11.945 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:12.205 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.205 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.205 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:12.205 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:12.205 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.205 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.205 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:12.205 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.205 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.205 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:12.205 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.205 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.205 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:12.205 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.205 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.205 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:12.205 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.205 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.205 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:12.206 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.206 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.206 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:12.206 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.467 
07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.467 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:12.468 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.468 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.468 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:12.468 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.468 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.468 07:29:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:12.730 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.730 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.730 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.730 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:12.730 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:12.730 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:12.730 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:12.730 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:12.730 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:12.730 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:12.730 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.730 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.730 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.991 07:29:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:12.991 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.253 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:13.514 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.514 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.514 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:13.514 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:13.514 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.514 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.514 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:13.514 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:13.514 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:13.514 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:13.514 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:13.514 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:13.514 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:13.775 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:13.775 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:13.775 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:13.775 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:14.037 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:14.037 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:14.037 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:14.037 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.037 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.037 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:14.037 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.037 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.037 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:14.037 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:14.037 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.037 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.037 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:14.037 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.037 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.037 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:14.037 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.037 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.037 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:14.037 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.037 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.037 07:29:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:14.298 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.298 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.298 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:14.298 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:14.298 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:14.298 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:14.298 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.298 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.298 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:14.298 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:14.298 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:14.298 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:14.298 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.298 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
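The trace above is the body of the hot-plug stress loop in target/ns_hotplug_stress.sh: per the sh@16/@17/@18 markers, a ten-iteration counter loop that attaches the null bdevs to subsystem nqn.2016-06.io.spdk:cnode1 as namespaces and hot-removes them again. A minimal sketch of that loop, reconstructed from the trace; the rpc.py path, NQN, and bdev names are copied from the log, while the strict add-then-remove pairing below is an assumption (the log shows the calls landing in varying order):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  for (( i = 0; i < 10; ++i )); do            # sh@16: (( ++i )) / (( i < 10 )) in the trace
      for n in {1..8}; do
          # sh@17: expose null bdev "null$((n - 1))" as namespace $n of cnode1 ...
          $RPC nvmf_subsystem_add_ns -n $n $NQN null$((n - 1))
          # sh@18: ... then hot-remove that namespace id again
          $RPC nvmf_subsystem_remove_ns $NQN $n
      done
  done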
00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:14.560 rmmod nvme_tcp 00:31:14.560 rmmod nvme_fabrics 00:31:14.560 rmmod nvme_keyring 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3720797 ']' 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3720797 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 3720797 ']' 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 3720797 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:14.560 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3720797 00:31:14.821 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:14.821 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:14.821 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3720797' 00:31:14.821 killing process with pid 3720797 00:31:14.821 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 3720797 00:31:14.821 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 3720797 00:31:14.821 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:14.821 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:14.821 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:14.821 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:31:14.821 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:31:14.821 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:14.821 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:31:14.821 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:14.821 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:14.821 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.821 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:14.821 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:17.370 00:31:17.370 real 0m48.562s 00:31:17.370 user 3m1.847s 00:31:17.370 sys 0m22.087s 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:17.370 ************************************ 00:31:17.370 END TEST nvmf_ns_hotplug_stress 00:31:17.370 ************************************ 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:17.370 07:29:39 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:17.370 ************************************ 00:31:17.370 START TEST nvmf_delete_subsystem 00:31:17.370 ************************************ 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:17.370 * Looking for test storage... 00:31:17.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:17.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.370 --rc genhtml_branch_coverage=1 00:31:17.370 --rc genhtml_function_coverage=1 00:31:17.370 --rc genhtml_legend=1 00:31:17.370 --rc geninfo_all_blocks=1 00:31:17.370 --rc geninfo_unexecuted_blocks=1 00:31:17.370 00:31:17.370 ' 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:17.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.370 --rc genhtml_branch_coverage=1 00:31:17.370 --rc genhtml_function_coverage=1 00:31:17.370 --rc genhtml_legend=1 00:31:17.370 --rc geninfo_all_blocks=1 00:31:17.370 --rc geninfo_unexecuted_blocks=1 00:31:17.370 00:31:17.370 ' 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:17.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.370 --rc genhtml_branch_coverage=1 00:31:17.370 --rc genhtml_function_coverage=1 00:31:17.370 --rc genhtml_legend=1 00:31:17.370 --rc geninfo_all_blocks=1 00:31:17.370 --rc geninfo_unexecuted_blocks=1 00:31:17.370 00:31:17.370 ' 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:17.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.370 --rc genhtml_branch_coverage=1 00:31:17.370 --rc genhtml_function_coverage=1 00:31:17.370 --rc 
genhtml_legend=1 00:31:17.370 --rc geninfo_all_blocks=1 00:31:17.370 --rc geninfo_unexecuted_blocks=1 00:31:17.370 00:31:17.370 ' 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:17.370 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:17.370 07:29:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:31:17.371 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:25.592 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:25.592 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:25.592 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:25.592 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:25.592 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:25.592 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:25.592 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:25.592 07:29:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:25.592 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:25.592 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:25.592 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:25.592 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:25.592 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:25.592 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:25.592 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:31:25.592 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:25.593 07:29:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:25.593 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:25.593 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.593 07:29:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:25.593 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:25.593 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:25.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:25.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms
00:31:25.593
00:31:25.593 --- 10.0.0.2 ping statistics ---
00:31:25.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:25.593 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms
00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:25.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:25.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms
00:31:25.593
00:31:25.593 --- 10.0.0.1 ping statistics ---
00:31:25.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:25.593 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms
00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:25.593 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:31:25.594 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:25.594 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:25.594 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:25.594 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:25.594 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:25.594 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:25.594 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:25.594 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:31:25.594 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:25.594 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable
00:31:25.594 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:25.594 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3732537
00:31:25.594 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3732537
00:31:25.594 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:31:25.594 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 3732537 ']'
00:31:25.594 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:25.594 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100
00:31:25.594 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:25.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
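The namespace topology that nvmf_tcp_init builds above can be reproduced by hand. A minimal sketch, assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing this run discovered (every command below appears in the trace above):

    # Target port lives in its own namespace; the initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> root ns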
00:31:25.594 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:25.594 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:25.594 [2024-11-20 07:29:46.866639] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:25.594 [2024-11-20 07:29:46.867744] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:31:25.594 [2024-11-20 07:29:46.867795] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:25.594 [2024-11-20 07:29:46.967639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:25.594 [2024-11-20 07:29:47.019779] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:25.594 [2024-11-20 07:29:47.019833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:25.594 [2024-11-20 07:29:47.019842] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:25.594 [2024-11-20 07:29:47.019849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:25.594 [2024-11-20 07:29:47.019855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:25.594 [2024-11-20 07:29:47.021462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:25.594 [2024-11-20 07:29:47.021466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.594 [2024-11-20 07:29:47.098433] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:25.594 [2024-11-20 07:29:47.099202] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:25.594 [2024-11-20 07:29:47.099411] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
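The target that printed the interrupt-mode notices above is launched inside that namespace. A hedged sketch of the launch-and-wait step (binary path and flags exactly as logged; the polling loop is only an approximation of the harness's waitforlisten, using the standard rpc_get_methods RPC as a liveness probe):

    # Start nvmf_tgt on cores 0-1 in interrupt mode, inside the target namespace.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # Wait until the app answers on the default RPC socket (/var/tmp/spdk.sock).
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.1
    done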
00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:25.594 [2024-11-20 07:29:47.734508] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:25.594 [2024-11-20 07:29:47.767091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:25.594 NULL1 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.594 07:29:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:25.594 Delay0 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3732837 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:25.594 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:25.856 [2024-11-20 07:29:47.895873] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
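Taken together, the RPCs above provision a Delay0-backed subsystem and then delete it while spdk_nvme_perf holds queue depth against it. A condensed sketch of the same flow, with all commands and arguments as logged (rpc.py stands in for the harness's rpc_cmd wrapper, and the poll loop mirrors the delay/kill -0 logic of delete_subsystem.sh):

    # Provision: TCP transport, subsystem, listener, and a deliberately slow namespace.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # Load: run perf on cores 2-3, let it ramp, then delete the subsystem under I/O.
    build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # perf is expected to exit with I/O errors; poll briefly until it is gone.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && exit 1
        sleep 0.5
    done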
00:31:27.773 07:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:27.773 07:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:27.773 07:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:28.035 Read completed with error (sct=0, sc=8)
00:31:28.035 Read completed with error (sct=0, sc=8)
00:31:28.035 Read completed with error (sct=0, sc=8)
00:31:28.035 starting I/O failed: -6
[several hundred further near-identical "Read/Write completed with error (sct=0, sc=8)" completions, interleaved with "starting I/O failed: -6" markers and the qpair state errors below, elided: in-flight and queued perf I/O is failed back while the subsystem is deleted]
00:31:28.035 [2024-11-20 07:29:50.103927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2479680 is same with the state(6) to be set
00:31:28.035 [2024-11-20 07:29:50.106882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6ef000d490 is same with the state(6) to be set
00:31:28.978 [2024-11-20 07:29:51.077860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247a9a0 is same with the state(6) to be set
00:31:28.978 [2024-11-20 07:29:51.107506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24794a0 is same with the state(6) to be set
00:31:28.979 [2024-11-20 07:29:51.108364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2479860 is same with the state(6) to be set
00:31:28.979 [2024-11-20 07:29:51.109625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6ef000d7c0 is same with the state(6) to be set
00:31:28.979 [2024-11-20 07:29:51.109715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6ef000d020 is same with the state(6) to be set
00:31:28.979 Initializing NVMe Controllers
00:31:28.979 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:28.979 Controller IO queue size 128, less than required.
00:31:28.979 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:28.979 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:28.979 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:28.979 Initialization complete. Launching workers.
00:31:28.979 ========================================================
00:31:28.979                                                                           Latency(us)
00:31:28.979 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:31:28.979 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     177.01       0.09  880550.29     243.48 1009734.37
00:31:28.979 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     157.62       0.08  923324.45     272.80 1012369.47
00:31:28.979 ========================================================
00:31:28.979 Total                                                                    :     334.63       0.16  900698.00     243.48 1012369.47
00:31:28.979
00:31:28.979 [2024-11-20 07:29:51.110409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247a9a0 (9): Bad file descriptor
00:31:28.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:31:28.979 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:28.979 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:31:28.979 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3732837
00:31:28.979 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:31:29.550 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:31:29.550 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3732837
00:31:29.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3732837) - No such process
00:31:29.550 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3732837
00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3732837
00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:29.551 07:29:51
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3732837 00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:29.551 [2024-11-20 07:29:51.642960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3733514 00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3733514 00:31:29.551 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:29.551 [2024-11-20 07:29:51.741143] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: 
*WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:31:30.123 07:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:30.123 07:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3733514 00:31:30.123 07:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:30.694 07:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:30.694 07:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3733514 00:31:30.694 07:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:30.954 07:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:30.954 07:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3733514 00:31:30.954 07:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:31.527 07:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:31.527 07:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3733514 00:31:31.527 07:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:32.099 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:32.099 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3733514 00:31:32.099 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:32.669 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:32.669 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3733514 00:31:32.669 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:32.669 Initializing NVMe Controllers 00:31:32.669 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:32.669 Controller IO queue size 128, less than required. 00:31:32.669 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:32.669 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:32.669 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:32.669 Initialization complete. Launching workers. 
00:31:32.669 ========================================================
00:31:32.669                                                                           Latency(us)
00:31:32.669 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:31:32.669 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1003940.70 1000163.94 1042262.59
00:31:32.669 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1005027.55 1000657.65 1042268.02
00:31:32.669 ========================================================
00:31:32.669 Total                                                                    :     256.00       0.12 1004484.13 1000163.94 1042268.02
00:31:32.669
00:31:32.929 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:32.929 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3733514
00:31:32.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3733514) - No such process
00:31:32.929 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3733514
00:31:32.929 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:31:32.929 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:31:32.929 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:32.930 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:31:32.930 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:32.930 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:31:32.930 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:32.930 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:33.195 rmmod nvme_tcp
00:31:33.195 rmmod nvme_fabrics
00:31:33.195 rmmod nvme_keyring
00:31:33.195 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:33.195 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:31:33.195 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:31:33.195 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3732537 ']'
00:31:33.195 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3732537
00:31:33.195 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 3732537 ']'
00:31:33.195 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 3732537
00:31:33.195 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname
00:31:33.195 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:31:33.195 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3732537 00:31:33.195 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:33.195 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:33.195 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3732537' 00:31:33.195 killing process with pid 3732537 00:31:33.195 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 3732537 00:31:33.196 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 3732537 00:31:33.196 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:33.196 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:33.196 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:33.196 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:33.196 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:33.196 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:33.196 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:33.196 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:33.196 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:33.196 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.196 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:33.196 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:35.837 00:31:35.837 real 0m18.378s 00:31:35.837 user 0m26.912s 00:31:35.837 sys 0m7.397s 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:35.837 ************************************ 00:31:35.837 END TEST nvmf_delete_subsystem 00:31:35.837 ************************************ 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:35.837 ************************************ 00:31:35.837 START TEST nvmf_host_management 00:31:35.837 ************************************ 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:35.837 * Looking for test storage... 00:31:35.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:35.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.837 --rc genhtml_branch_coverage=1 00:31:35.837 --rc genhtml_function_coverage=1 00:31:35.837 --rc genhtml_legend=1 00:31:35.837 --rc geninfo_all_blocks=1 00:31:35.837 --rc geninfo_unexecuted_blocks=1 00:31:35.837 00:31:35.837 ' 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:35.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.837 --rc genhtml_branch_coverage=1 00:31:35.837 --rc genhtml_function_coverage=1 00:31:35.837 --rc genhtml_legend=1 00:31:35.837 --rc geninfo_all_blocks=1 00:31:35.837 --rc geninfo_unexecuted_blocks=1 00:31:35.837 00:31:35.837 ' 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:35.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.837 --rc genhtml_branch_coverage=1 00:31:35.837 --rc genhtml_function_coverage=1 00:31:35.837 --rc genhtml_legend=1 00:31:35.837 --rc geninfo_all_blocks=1 00:31:35.837 --rc geninfo_unexecuted_blocks=1 00:31:35.837 00:31:35.837 ' 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:35.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.837 --rc genhtml_branch_coverage=1 00:31:35.837 --rc genhtml_function_coverage=1 00:31:35.837 --rc genhtml_legend=1 
00:31:35.837 --rc geninfo_all_blocks=1 00:31:35.837 --rc geninfo_unexecuted_blocks=1 00:31:35.837 00:31:35.837 ' 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:35.837 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:35.838 07:29:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:35.838 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:43.984 07:30:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:43.984 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:43.984 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:43.984 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:43.984 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:43.984 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:43.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:43.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:31:43.985 00:31:43.985 --- 10.0.0.2 ping statistics --- 00:31:43.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.985 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:43.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:43.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:31:43.985 00:31:43.985 --- 10.0.0.1 ping statistics --- 00:31:43.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.985 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3738580 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3738580 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3738580 ']' 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:43.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:43.985 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:43.985 [2024-11-20 07:30:05.457763] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:43.985 [2024-11-20 07:30:05.458885] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:31:43.985 [2024-11-20 07:30:05.458936] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:43.985 [2024-11-20 07:30:05.559426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:43.985 [2024-11-20 07:30:05.612590] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:43.985 [2024-11-20 07:30:05.612642] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:43.985 [2024-11-20 07:30:05.612652] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:43.985 [2024-11-20 07:30:05.612659] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:43.985 [2024-11-20 07:30:05.612665] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:43.985 [2024-11-20 07:30:05.615042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:43.985 [2024-11-20 07:30:05.615216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:43.985 [2024-11-20 07:30:05.615385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:43.985 [2024-11-20 07:30:05.615387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:43.985 [2024-11-20 07:30:05.694263] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:43.985 [2024-11-20 07:30:05.695632] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:43.985 [2024-11-20 07:30:05.695693] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:43.985 [2024-11-20 07:30:05.696099] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:43.985 [2024-11-20 07:30:05.696153] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:44.247 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:44.247 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:31:44.247 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:44.247 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:44.247 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:44.247 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:44.247 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:44.247 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.247 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:44.247 [2024-11-20 07:30:06.336341] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:44.247 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.247 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:44.247 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:44.247 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:44.247 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:44.247 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:44.247 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:44.248 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.248 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:44.248 Malloc0 00:31:44.248 [2024-11-20 07:30:06.432632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:44.248 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.248 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:44.248 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:44.248 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:44.248 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3738676 00:31:44.248 07:30:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3738676 /var/tmp/bdevperf.sock 00:31:44.248 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3738676 ']' 00:31:44.248 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:44.248 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:44.248 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:44.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:44.248 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:44.248 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:44.248 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:44.248 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:44.248 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:44.248 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:44.248 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:44.248 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:44.248 { 00:31:44.248 "params": { 00:31:44.248 "name": "Nvme$subsystem", 00:31:44.248 "trtype": "$TEST_TRANSPORT", 00:31:44.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.248 "adrfam": "ipv4", 00:31:44.248 "trsvcid": "$NVMF_PORT", 00:31:44.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.248 "hdgst": ${hdgst:-false}, 00:31:44.248 "ddgst": ${ddgst:-false} 00:31:44.248 }, 00:31:44.248 "method": "bdev_nvme_attach_controller" 00:31:44.248 } 00:31:44.248 EOF 00:31:44.248 )") 00:31:44.248 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:44.248 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:31:44.248 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:44.248 07:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:44.248 "params": { 00:31:44.248 "name": "Nvme0", 00:31:44.248 "trtype": "tcp", 00:31:44.248 "traddr": "10.0.0.2", 00:31:44.248 "adrfam": "ipv4", 00:31:44.248 "trsvcid": "4420", 00:31:44.248 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:44.248 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:44.248 "hdgst": false, 00:31:44.248 "ddgst": false 00:31:44.248 }, 00:31:44.248 "method": "bdev_nvme_attach_controller" 00:31:44.248 }' 00:31:44.509 [2024-11-20 07:30:06.543408] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:31:44.509 [2024-11-20 07:30:06.543482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738676 ] 00:31:44.509 [2024-11-20 07:30:06.637877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:44.509 [2024-11-20 07:30:06.693092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.770 Running I/O for 10 seconds... 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=650 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 650 -ge 100 ']' 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.344 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:45.344 [2024-11-20 07:30:07.456215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.344 [2024-11-20 07:30:07.456303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.344 [2024-11-20 07:30:07.456313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.344 [2024-11-20 07:30:07.456322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.344 [2024-11-20 07:30:07.456329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.344 [2024-11-20 07:30:07.456338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.344 [2024-11-20 07:30:07.456345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.344 [2024-11-20 07:30:07.456353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.344 [2024-11-20 07:30:07.456361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.344 [2024-11-20 07:30:07.456368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.344 [2024-11-20 07:30:07.456376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.344 
[2024-11-20 07:30:07.456384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.344 [2024-11-20 07:30:07.456398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.344 [2024-11-20 07:30:07.456405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.344 [2024-11-20 07:30:07.456412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the 
state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82f20 is same with the state(6) to be set 00:31:45.345 [2024-11-20 07:30:07.456865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.345 [2024-11-20 07:30:07.456921] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.345 [2024-11-20 07:30:07.456946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.345 [2024-11-20 07:30:07.456956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.345 [2024-11-20 07:30:07.456967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.345 [2024-11-20 07:30:07.456975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.345 [2024-11-20 07:30:07.456985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.345 [2024-11-20 07:30:07.456994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.345 [2024-11-20 07:30:07.457004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.345 [2024-11-20 07:30:07.457013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.345 [2024-11-20 07:30:07.457024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.345 [2024-11-20 07:30:07.457032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.345 [2024-11-20 07:30:07.457043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.345 [2024-11-20 07:30:07.457059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.345 [2024-11-20 07:30:07.457069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.345 [2024-11-20 07:30:07.457076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.345 [2024-11-20 07:30:07.457086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.345 [2024-11-20 07:30:07.457094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.345 [2024-11-20 07:30:07.457104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.345 [2024-11-20 07:30:07.457112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.345 [2024-11-20 07:30:07.457121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.345 [2024-11-20 07:30:07.457129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.345 [2024-11-20 07:30:07.457138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.345 [2024-11-20 07:30:07.457146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.345 [2024-11-20 07:30:07.457156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.345 [2024-11-20 07:30:07.457172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.345 [2024-11-20 07:30:07.457182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.345 [2024-11-20 07:30:07.457189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.345 [2024-11-20 07:30:07.457199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.345 [2024-11-20 07:30:07.457207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.345 [2024-11-20 07:30:07.457216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.345 [2024-11-20 07:30:07.457224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.345 [2024-11-20 07:30:07.457234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.345 [2024-11-20 07:30:07.457241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.345 [2024-11-20 07:30:07.457250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.345 [2024-11-20 07:30:07.457258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.345 [2024-11-20 07:30:07.457267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.345 [2024-11-20 07:30:07.457275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.345 [2024-11-20 07:30:07.457287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.345 [2024-11-20 07:30:07.457295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.345 [2024-11-20 07:30:07.457304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.345 [2024-11-20 07:30:07.457312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:45.345 [2024-11-20 07:30:07.457322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:45.345 [2024-11-20 07:30:07.457329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:45.346 [42 further identical READ/completion record pairs omitted: cid:22 through cid:63, lba stepping by 128 from 92928 to 98176, each len:128 and each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
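The abort run above is the expected signature of a submission queue being torn down while bdevperf still has READs outstanding: every in-flight command is completed with ABORTED - SQ DELETION (00/08), after which the host attempts a controller reset. A minimal shell sketch of how this pattern is typically provoked (binary path and NQN taken from this log; the exact trigger used by host_management.sh may differ):

  # Start verify I/O in the background, then delete the subsystem under it.
  ./build/examples/bdevperf --json bdevperf.json -q 64 -o 65536 -w verify -t 5 &
  sleep 1
  # Outstanding READs on the deleted SQ complete as ABORTED - SQ DELETION (00/08).
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  wait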
00:31:45.346 [2024-11-20 07:30:07.458076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fd0e0 is same with the state(6) to be set
00:31:45.346 [2024-11-20 07:30:07.459408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:31:45.347 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:45.347 task offset: 98304 on job bdev=Nvme0n1 fails
00:31:45.347
00:31:45.347 Latency(us)
00:31:45.347 [2024-11-20T06:30:07.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:45.347 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:45.347 Job: Nvme0n1 ended in about 0.52 seconds with error
00:31:45.347 Verification LBA range: start 0x0 length 0x400
00:31:45.347 Nvme0n1 : 0.52 1372.27 85.77 123.52 0.00 41687.70 4123.31 38229.33
00:31:45.347 [2024-11-20T06:30:07.625Z] ===================================================================================================================
00:31:45.347 [2024-11-20T06:30:07.625Z] Total : 1372.27 85.77 123.52 0.00 41687.70 4123.31 38229.33
00:31:45.347 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:31:45.347 [2024-11-20 07:30:07.461663] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:31:45.347 [2024-11-20 07:30:07.461707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e4000 (9): Bad file descriptor
00:31:45.347 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:45.347 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:45.347 [2024-11-20 07:30:07.463404] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:31:45.347 [2024-11-20 07:30:07.463500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:31:45.347 [2024-11-20 07:30:07.463542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:45.347 [2024-11-20 07:30:07.463558] nvme_fabric.c:
599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:31:45.347 [2024-11-20 07:30:07.463566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:31:45.347 [2024-11-20 07:30:07.463575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:45.347 [2024-11-20 07:30:07.463583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11e4000 00:31:45.347 [2024-11-20 07:30:07.463607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e4000 (9): Bad file descriptor 00:31:45.347 [2024-11-20 07:30:07.463621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:45.347 [2024-11-20 07:30:07.463629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:45.347 [2024-11-20 07:30:07.463638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:45.347 [2024-11-20 07:30:07.463649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:45.347 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.347 07:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:46.288 07:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3738676 00:31:46.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3738676) - No such process 00:31:46.288 07:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:46.288 07:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:46.288 07:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:46.288 07:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:46.288 07:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:46.288 07:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:46.288 07:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:46.288 07:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:46.288 { 00:31:46.288 "params": { 00:31:46.288 "name": "Nvme$subsystem", 00:31:46.288 "trtype": "$TEST_TRANSPORT", 00:31:46.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:46.288 "adrfam": "ipv4", 00:31:46.288 "trsvcid": "$NVMF_PORT", 00:31:46.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:46.288 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:31:46.288 "hdgst": ${hdgst:-false}, 00:31:46.288 "ddgst": ${ddgst:-false} 00:31:46.288 }, 00:31:46.288 "method": "bdev_nvme_attach_controller" 00:31:46.288 } 00:31:46.288 EOF 00:31:46.288 )") 00:31:46.288 07:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:46.288 07:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:46.288 07:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:46.288 07:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:46.288 "params": { 00:31:46.288 "name": "Nvme0", 00:31:46.288 "trtype": "tcp", 00:31:46.288 "traddr": "10.0.0.2", 00:31:46.288 "adrfam": "ipv4", 00:31:46.288 "trsvcid": "4420", 00:31:46.288 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:46.288 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:46.288 "hdgst": false, 00:31:46.288 "ddgst": false 00:31:46.288 }, 00:31:46.288 "method": "bdev_nvme_attach_controller" 00:31:46.288 }' 00:31:46.288 [2024-11-20 07:30:08.536643] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:31:46.288 [2024-11-20 07:30:08.536721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3739073 ] 00:31:46.548 [2024-11-20 07:30:08.628049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.548 [2024-11-20 07:30:08.682457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.808 Running I/O for 1 seconds... 
00:31:48.204 1766.00 IOPS, 110.38 MiB/s
00:31:48.204
00:31:48.204 Latency(us)
00:31:48.204 [2024-11-20T06:30:10.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:48.204 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:48.204 Verification LBA range: start 0x0 length 0x400
00:31:48.204 Nvme0n1 : 1.02 1807.50 112.97 0.00 0.00 34625.65 4532.91 35607.89
00:31:48.204 [2024-11-20T06:30:10.482Z] ===================================================================================================================
00:31:48.205 [2024-11-20T06:30:10.483Z] Total : 1807.50 112.97 0.00 0.00 34625.65 4532.91 35607.89
00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:48.205 rmmod nvme_tcp
00:31:48.205 rmmod nvme_fabrics
00:31:48.205 rmmod nvme_keyring
00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3738580 ']'
00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3738580
00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 3738580 ']'
00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 3738580
00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname
00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:31:48.205 07:30:10
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3738580 00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3738580' 00:31:48.205 killing process with pid 3738580 00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 3738580 00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 3738580 00:31:48.205 [2024-11-20 07:30:10.415089] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:48.205 07:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:50.754 00:31:50.754 real 0m14.900s 00:31:50.754 user 0m20.225s 00:31:50.754 sys 0m7.493s 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:50.754 ************************************ 00:31:50.754 END TEST nvmf_host_management 00:31:50.754 ************************************ 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test 
nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:50.754 ************************************ 00:31:50.754 START TEST nvmf_lvol 00:31:50.754 ************************************ 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:50.754 * Looking for test storage... 00:31:50.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:50.754 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:50.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.755 --rc genhtml_branch_coverage=1 00:31:50.755 --rc genhtml_function_coverage=1 00:31:50.755 --rc genhtml_legend=1 00:31:50.755 --rc geninfo_all_blocks=1 00:31:50.755 --rc geninfo_unexecuted_blocks=1 00:31:50.755 00:31:50.755 ' 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:50.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.755 --rc genhtml_branch_coverage=1 00:31:50.755 --rc genhtml_function_coverage=1 00:31:50.755 --rc genhtml_legend=1 00:31:50.755 --rc geninfo_all_blocks=1 00:31:50.755 --rc geninfo_unexecuted_blocks=1 00:31:50.755 00:31:50.755 ' 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:50.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.755 --rc genhtml_branch_coverage=1 00:31:50.755 --rc genhtml_function_coverage=1 00:31:50.755 --rc genhtml_legend=1 00:31:50.755 --rc geninfo_all_blocks=1 00:31:50.755 --rc geninfo_unexecuted_blocks=1 00:31:50.755 00:31:50.755 ' 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:50.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.755 --rc genhtml_branch_coverage=1 00:31:50.755 --rc genhtml_function_coverage=1 00:31:50.755 --rc genhtml_legend=1 00:31:50.755 --rc geninfo_all_blocks=1 00:31:50.755 --rc geninfo_unexecuted_blocks=1 00:31:50.755 00:31:50.755 ' 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:50.755 07:30:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:50.755 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:50.756 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:50.756 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:50.756 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:50.756 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:50.756 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:50.756 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:50.756 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:50.756 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:50.756 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:50.756 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:50.756 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:50.756 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:50.756 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:50.756 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:50.756 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.756 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:50.756 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.756 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:50.756 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:50.756 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:50.756 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:58.903 07:30:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:58.903 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:58.903 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:58.903 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:58.904 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:58.904 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:58.904 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:58.904 
07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:58.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:58.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms 00:31:58.904 00:31:58.904 --- 10.0.0.2 ping statistics --- 00:31:58.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.904 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:58.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:58.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:31:58.904 00:31:58.904 --- 10.0.0.1 ping statistics --- 00:31:58.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.904 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3744149 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3744149 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 3744149 ']' 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:58.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:58.904 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:58.904 [2024-11-20 07:30:20.416443] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
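nvmfappstart has just launched the target inside the namespace in interrupt mode. In script form, the launch and readiness wait reduce to roughly the following (a sketch; the suite's waitforlisten helper performs the socket polling shown here, and spdk_get_version is one RPC commonly used to probe liveness):

  # -m 0x7 pins three reactors; --interrupt-mode arms them for event-driven polling.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
  nvmfpid=$!
  # Poll the RPC socket until the app answers; only then issue further rpc.py calls.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.2
  done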
00:31:58.904 [2024-11-20 07:30:20.417552] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:31:58.904 [2024-11-20 07:30:20.417603] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:58.904 [2024-11-20 07:30:20.518994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:58.904 [2024-11-20 07:30:20.570199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:58.904 [2024-11-20 07:30:20.570249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:58.904 [2024-11-20 07:30:20.570257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:58.904 [2024-11-20 07:30:20.570264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:58.904 [2024-11-20 07:30:20.570271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:58.904 [2024-11-20 07:30:20.572330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:58.904 [2024-11-20 07:30:20.572490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.904 [2024-11-20 07:30:20.572489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:58.904 [2024-11-20 07:30:20.649235] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:58.904 [2024-11-20 07:30:20.650284] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:58.904 [2024-11-20 07:30:20.651138] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:58.904 [2024-11-20 07:30:20.651232] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
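With the target up, the lvol test builds its stack entirely over RPC: a TCP transport, two 64 MiB malloc bdevs striped into a RAID0, an lvstore on the RAID, a 20 MiB lvol, and finally a subsystem exposing that lvol on 10.0.0.2:4420, as the lines below show. Condensed into the same rpc.py calls (capturing the returned UUIDs via command substitution is illustrative):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512                        # Malloc0
  scripts/rpc.py bdev_malloc_create 64 512                        # Malloc1
  scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$(scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs)        # lvstore UUID
  lvol=$(scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB lvol UUID
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420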
00:31:59.166 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:59.166 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:31:59.166 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:59.166 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:59.166 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:59.166 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:59.166 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:59.166 [2024-11-20 07:30:21.429399] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:59.427 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:59.427 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:59.427 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:59.689 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:59.689 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:59.952 07:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:00.213 07:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ea5f0ec5-0dbc-4a28-8886-a129aee99208 00:32:00.213 07:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ea5f0ec5-0dbc-4a28-8886-a129aee99208 lvol 20 00:32:00.213 07:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=379336ea-090b-479a-9729-ae19393ad3d6 00:32:00.213 07:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:00.473 07:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 379336ea-090b-479a-9729-ae19393ad3d6 00:32:00.735 07:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:00.735 [2024-11-20 07:30:23.009328] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:32:00.997 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:00.997 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3744624 00:32:00.997 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:32:00.997 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:32:02.386 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 379336ea-090b-479a-9729-ae19393ad3d6 MY_SNAPSHOT 00:32:02.386 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=82f2967d-cfc9-4730-97ae-112ce45acd73 00:32:02.386 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 379336ea-090b-479a-9729-ae19393ad3d6 30 00:32:02.647 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 82f2967d-cfc9-4730-97ae-112ce45acd73 MY_CLONE 00:32:02.647 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b8e78390-dca8-4ab9-a16c-b89a3bf9054a 00:32:02.647 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b8e78390-dca8-4ab9-a16c-b89a3bf9054a 00:32:03.218 07:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3744624 00:32:11.355 Initializing NVMe Controllers 00:32:11.355 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:11.356 Controller IO queue size 128, less than required. 00:32:11.356 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:11.356 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:11.356 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:11.356 Initialization complete. Launching workers. 
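Between 07:30:21 and 07:30:25 the entries above assemble the nvmf_lvol data path and then mutate the logical volume while spdk_nvme_perf drives random writes (its I/O statistics follow below): two 64 MiB malloc bdevs form a RAID0, an lvstore sits on the RAID, and a 20 MiB lvol is exported over NVMe/TCP before being snapshotted, resized, cloned, and inflated. A condensed sketch of that RPC sequence, with every call and argument taken from the log; sizes are in MiB, and $rpc stands for the rpc.py path used throughout:

# Data path: malloc bdevs -> RAID0 -> lvstore -> lvol, exported via NVMe/TCP.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512                    # Malloc0
$rpc bdev_malloc_create 64 512                    # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)    # prints the new lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # 20 MiB volume; prints its UUID
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Lvol lifecycle exercised under I/O: snapshot, grow the origin, clone, decouple the clone.
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                  # 20 MiB -> 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                   # allocate all clusters; clone no longer depends on the snapshot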
00:32:11.356 ========================================================
00:32:11.356 Latency(us)
00:32:11.356 Device Information : IOPS MiB/s Average min max
00:32:11.356 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15394.38 60.13 8317.16 1885.24 93125.66
00:32:11.356 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15566.48 60.81 8224.29 512.24 77526.00
00:32:11.356 ========================================================
00:32:11.356 Total : 30960.86 120.94 8270.47 512.24 93125.66
00:32:11.356
00:32:11.356 07:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:11.616 07:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 379336ea-090b-479a-9729-ae19393ad3d6 00:32:11.877 07:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ea5f0ec5-0dbc-4a28-8886-a129aee99208 00:32:11.877 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:11.877 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:11.877 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:32:11.877 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:11.878 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:32:11.878 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:11.878 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:32:11.878 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:11.878 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:11.878 rmmod nvme_tcp 00:32:11.878 rmmod nvme_fabrics 00:32:11.878 rmmod nvme_keyring 00:32:11.878 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:11.878 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:32:11.878 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:32:11.878 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3744149 ']' 00:32:11.878 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3744149 00:32:12.138 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 3744149 ']' 00:32:12.138 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 3744149 00:32:12.138 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:32:12.138 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:12.138 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol --
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3744149 00:32:12.138 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:12.138 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:12.138 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3744149' 00:32:12.138 killing process with pid 3744149 00:32:12.138 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 3744149 00:32:12.138 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 3744149 00:32:12.138 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:12.138 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:12.138 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:12.138 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:12.138 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:12.138 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:12.138 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:12.138 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:12.138 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:12.138 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:12.138 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:12.139 07:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:14.685 00:32:14.685 real 0m23.825s 00:32:14.685 user 0m55.980s 00:32:14.685 sys 0m10.620s 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:14.685 ************************************ 00:32:14.685 END TEST nvmf_lvol 00:32:14.685 ************************************ 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:14.685 ************************************ 00:32:14.685 START TEST nvmf_lvs_grow 00:32:14.685 
************************************ 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:14.685 * Looking for test storage... 00:32:14.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:14.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.685 --rc genhtml_branch_coverage=1 00:32:14.685 --rc genhtml_function_coverage=1 00:32:14.685 --rc genhtml_legend=1 00:32:14.685 --rc geninfo_all_blocks=1 00:32:14.685 --rc geninfo_unexecuted_blocks=1 00:32:14.685 00:32:14.685 ' 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:14.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.685 --rc genhtml_branch_coverage=1 00:32:14.685 --rc genhtml_function_coverage=1 00:32:14.685 --rc genhtml_legend=1 00:32:14.685 --rc geninfo_all_blocks=1 00:32:14.685 --rc geninfo_unexecuted_blocks=1 00:32:14.685 00:32:14.685 ' 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:14.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.685 --rc genhtml_branch_coverage=1 00:32:14.685 --rc genhtml_function_coverage=1 00:32:14.685 --rc genhtml_legend=1 00:32:14.685 --rc geninfo_all_blocks=1 00:32:14.685 --rc geninfo_unexecuted_blocks=1 00:32:14.685 00:32:14.685 ' 00:32:14.685 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:14.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.685 --rc genhtml_branch_coverage=1 00:32:14.685 --rc genhtml_function_coverage=1 00:32:14.685 --rc genhtml_legend=1 00:32:14.685 --rc geninfo_all_blocks=1 00:32:14.685 --rc geninfo_unexecuted_blocks=1 00:32:14.685 00:32:14.685 ' 00:32:14.685 07:30:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:32:14.686 07:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:22.832 07:30:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
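The gather_supported_nvmf_pci_devs walk above whitelists Intel E810/X722 and Mellanox device IDs and then resolves each matching PCI address to its kernel net device through sysfs, which is how the two cvl_0_* interfaces get discovered. A minimal sketch of that lookup, using the first E810 port reported in this run:

# Map a NIC's PCI address to its net device name, mirroring
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) from nvmf/common.sh above.
pci=0000:4b:00.0
for dev in "/sys/bus/pci/devices/$pci/net/"*; do
    echo "Found net device under $pci: ${dev##*/}"   # prints cvl_0_0 on this machine
done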
00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:22.832 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:22.832 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.832 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:22.833 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:22.833 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:22.833 07:30:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:22.833 07:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:22.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:22.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:32:22.833 00:32:22.833 --- 10.0.0.2 ping statistics --- 00:32:22.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.833 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:22.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:22.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:32:22.833 00:32:22.833 --- 10.0.0.1 ping statistics --- 00:32:22.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.833 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3750862 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3750862 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 3750862 ']' 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:22.833 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:22.833 [2024-11-20 07:30:44.284534] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
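The nvmf_tcp_init sequence above wires the physical back-to-back link for the test: one E810 port moves into a private namespace to act as the target (10.0.0.2) while the other stays in the root namespace as the initiator (10.0.0.1), port 4420 is opened in the firewall, and a ping in each direction validates the path. The same commands, collected from the log into one block:

# Target side lives in the cvl_0_0_ns_spdk namespace; initiator stays in the root namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns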
00:32:22.833 [2024-11-20 07:30:44.285685] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:32:22.833 [2024-11-20 07:30:44.285740] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:22.833 [2024-11-20 07:30:44.387229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.833 [2024-11-20 07:30:44.438171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:22.833 [2024-11-20 07:30:44.438220] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:22.833 [2024-11-20 07:30:44.438228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:22.833 [2024-11-20 07:30:44.438235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:22.833 [2024-11-20 07:30:44.438242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:22.833 [2024-11-20 07:30:44.439011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.834 [2024-11-20 07:30:44.514823] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:22.834 [2024-11-20 07:30:44.515105] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:22.834 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:22.834 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:32:22.834 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:22.834 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:22.834 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:23.094 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:23.094 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:23.094 [2024-11-20 07:30:45.311893] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:23.094 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:23.094 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:23.094 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:23.094 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:23.094 ************************************ 00:32:23.094 START TEST lvs_grow_clean 00:32:23.094 ************************************ 00:32:23.095 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # 
lvs_grow 00:32:23.095 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:23.095 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:23.095 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:23.095 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:23.095 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:23.095 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:23.095 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:23.355 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:23.355 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:23.355 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:23.355 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:23.616 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a586a4b1-345c-4912-ae9b-e6a854be7a21 00:32:23.616 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a586a4b1-345c-4912-ae9b-e6a854be7a21 00:32:23.616 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:23.877 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:23.877 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:23.878 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a586a4b1-345c-4912-ae9b-e6a854be7a21 lvol 150 00:32:23.878 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a4b0d827-e06c-40c3-ab3d-e3ed14c7afee 00:32:23.878 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:23.878 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:24.139 [2024-11-20 07:30:46.311566] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:24.139 [2024-11-20 07:30:46.311730] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:24.139 true 00:32:24.139 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a586a4b1-345c-4912-ae9b-e6a854be7a21 00:32:24.139 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:24.400 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:24.400 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:24.660 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a4b0d827-e06c-40c3-ab3d-e3ed14c7afee 00:32:24.660 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:24.921 [2024-11-20 07:30:47.032271] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:24.921 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:25.181 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:25.181 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3751569 00:32:25.181 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:25.181 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3751569 /var/tmp/bdevperf.sock 00:32:25.181 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 3751569 ']' 00:32:25.181 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:32:25.181 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:25.181 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:25.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:25.181 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:25.181 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:25.181 [2024-11-20 07:30:47.254426] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:32:25.181 [2024-11-20 07:30:47.254497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3751569 ] 00:32:25.181 [2024-11-20 07:30:47.346678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.181 [2024-11-20 07:30:47.398775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:26.123 07:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:26.123 07:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:32:26.123 07:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:26.123 Nvme0n1 00:32:26.123 07:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:26.384 [ 00:32:26.384 { 00:32:26.384 "name": "Nvme0n1", 00:32:26.384 "aliases": [ 00:32:26.384 "a4b0d827-e06c-40c3-ab3d-e3ed14c7afee" 00:32:26.384 ], 00:32:26.384 "product_name": "NVMe disk", 00:32:26.384 "block_size": 4096, 00:32:26.384 "num_blocks": 38912, 00:32:26.384 "uuid": "a4b0d827-e06c-40c3-ab3d-e3ed14c7afee", 00:32:26.384 "numa_id": 0, 00:32:26.384 "assigned_rate_limits": { 00:32:26.384 "rw_ios_per_sec": 0, 00:32:26.384 "rw_mbytes_per_sec": 0, 00:32:26.384 "r_mbytes_per_sec": 0, 00:32:26.384 "w_mbytes_per_sec": 0 00:32:26.384 }, 00:32:26.384 "claimed": false, 00:32:26.384 "zoned": false, 00:32:26.384 "supported_io_types": { 00:32:26.384 "read": true, 00:32:26.384 "write": true, 00:32:26.384 "unmap": true, 00:32:26.384 "flush": true, 00:32:26.384 "reset": true, 00:32:26.384 "nvme_admin": true, 00:32:26.384 "nvme_io": true, 00:32:26.384 "nvme_io_md": false, 00:32:26.384 "write_zeroes": true, 00:32:26.384 "zcopy": false, 00:32:26.384 "get_zone_info": false, 00:32:26.384 "zone_management": false, 00:32:26.384 "zone_append": false, 00:32:26.384 "compare": true, 00:32:26.384 "compare_and_write": true, 00:32:26.384 "abort": true, 00:32:26.384 "seek_hole": false, 00:32:26.384 "seek_data": false, 00:32:26.384 "copy": true, 
00:32:26.384 "nvme_iov_md": false 00:32:26.384 }, 00:32:26.384 "memory_domains": [ 00:32:26.384 { 00:32:26.384 "dma_device_id": "system", 00:32:26.384 "dma_device_type": 1 00:32:26.384 } 00:32:26.384 ], 00:32:26.384 "driver_specific": { 00:32:26.384 "nvme": [ 00:32:26.384 { 00:32:26.384 "trid": { 00:32:26.384 "trtype": "TCP", 00:32:26.384 "adrfam": "IPv4", 00:32:26.384 "traddr": "10.0.0.2", 00:32:26.384 "trsvcid": "4420", 00:32:26.384 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:26.384 }, 00:32:26.384 "ctrlr_data": { 00:32:26.384 "cntlid": 1, 00:32:26.384 "vendor_id": "0x8086", 00:32:26.384 "model_number": "SPDK bdev Controller", 00:32:26.384 "serial_number": "SPDK0", 00:32:26.384 "firmware_revision": "25.01", 00:32:26.384 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:26.384 "oacs": { 00:32:26.384 "security": 0, 00:32:26.384 "format": 0, 00:32:26.384 "firmware": 0, 00:32:26.384 "ns_manage": 0 00:32:26.384 }, 00:32:26.384 "multi_ctrlr": true, 00:32:26.384 "ana_reporting": false 00:32:26.384 }, 00:32:26.384 "vs": { 00:32:26.384 "nvme_version": "1.3" 00:32:26.384 }, 00:32:26.384 "ns_data": { 00:32:26.384 "id": 1, 00:32:26.384 "can_share": true 00:32:26.384 } 00:32:26.384 } 00:32:26.384 ], 00:32:26.384 "mp_policy": "active_passive" 00:32:26.384 } 00:32:26.384 } 00:32:26.384 ] 00:32:26.384 07:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3751749 00:32:26.384 07:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:26.384 07:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:26.646 Running I/O for 10 seconds... 
00:32:27.588 Latency(us) 00:32:27.588 [2024-11-20T06:30:49.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:27.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:27.588 Nvme0n1 : 1.00 16791.00 65.59 0.00 0.00 0.00 0.00 0.00 00:32:27.588 [2024-11-20T06:30:49.866Z] =================================================================================================================== 00:32:27.588 [2024-11-20T06:30:49.866Z] Total : 16791.00 65.59 0.00 0.00 0.00 0.00 0.00 00:32:27.588 00:32:28.531 07:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a586a4b1-345c-4912-ae9b-e6a854be7a21 00:32:28.531 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:28.531 Nvme0n1 : 2.00 17031.50 66.53 0.00 0.00 0.00 0.00 0.00 00:32:28.531 [2024-11-20T06:30:50.809Z] =================================================================================================================== 00:32:28.531 [2024-11-20T06:30:50.809Z] Total : 17031.50 66.53 0.00 0.00 0.00 0.00 0.00 00:32:28.531 00:32:28.531 true 00:32:28.531 07:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a586a4b1-345c-4912-ae9b-e6a854be7a21 00:32:28.531 07:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:28.791 07:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:28.791 07:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:28.791 07:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3751749 00:32:29.734 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:29.734 Nvme0n1 : 3.00 17281.00 67.50 0.00 0.00 0.00 0.00 0.00 00:32:29.734 [2024-11-20T06:30:52.012Z] =================================================================================================================== 00:32:29.734 [2024-11-20T06:30:52.012Z] Total : 17281.00 67.50 0.00 0.00 0.00 0.00 0.00 00:32:29.734 00:32:30.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:30.673 Nvme0n1 : 4.00 17818.50 69.60 0.00 0.00 0.00 0.00 0.00 00:32:30.673 [2024-11-20T06:30:52.951Z] =================================================================================================================== 00:32:30.673 [2024-11-20T06:30:52.951Z] Total : 17818.50 69.60 0.00 0.00 0.00 0.00 0.00 00:32:30.673 00:32:31.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:31.614 Nvme0n1 : 5.00 19334.80 75.53 0.00 0.00 0.00 0.00 0.00 00:32:31.614 [2024-11-20T06:30:53.892Z] =================================================================================================================== 00:32:31.614 [2024-11-20T06:30:53.892Z] Total : 19334.80 75.53 0.00 0.00 0.00 0.00 0.00 00:32:31.614 00:32:32.555 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:32.555 Nvme0n1 : 6.00 20366.83 79.56 0.00 0.00 0.00 0.00 0.00 00:32:32.555 [2024-11-20T06:30:54.833Z] 
=================================================================================================================== 00:32:32.555 [2024-11-20T06:30:54.833Z] Total : 20366.83 79.56 0.00 0.00 0.00 0.00 0.00 00:32:32.555 00:32:33.498 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:33.498 Nvme0n1 : 7.00 21095.00 82.40 0.00 0.00 0.00 0.00 0.00 00:32:33.498 [2024-11-20T06:30:55.776Z] =================================================================================================================== 00:32:33.498 [2024-11-20T06:30:55.776Z] Total : 21095.00 82.40 0.00 0.00 0.00 0.00 0.00 00:32:33.498 00:32:34.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:34.439 Nvme0n1 : 8.00 21647.12 84.56 0.00 0.00 0.00 0.00 0.00 00:32:34.439 [2024-11-20T06:30:56.717Z] =================================================================================================================== 00:32:34.439 [2024-11-20T06:30:56.717Z] Total : 21647.12 84.56 0.00 0.00 0.00 0.00 0.00 00:32:34.439 00:32:35.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:35.823 Nvme0n1 : 9.00 22078.22 86.24 0.00 0.00 0.00 0.00 0.00 00:32:35.823 [2024-11-20T06:30:58.101Z] =================================================================================================================== 00:32:35.823 [2024-11-20T06:30:58.101Z] Total : 22078.22 86.24 0.00 0.00 0.00 0.00 0.00 00:32:35.823 00:32:36.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:36.537 Nvme0n1 : 10.00 22423.10 87.59 0.00 0.00 0.00 0.00 0.00 00:32:36.537 [2024-11-20T06:30:58.815Z] =================================================================================================================== 00:32:36.537 [2024-11-20T06:30:58.816Z] Total : 22423.10 87.59 0.00 0.00 0.00 0.00 0.00 00:32:36.538 00:32:36.538 00:32:36.538 Latency(us) 00:32:36.538 [2024-11-20T06:30:58.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.538 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:36.538 Nvme0n1 : 10.00 22428.33 87.61 0.00 0.00 5703.99 2880.85 27743.57 00:32:36.538 [2024-11-20T06:30:58.816Z] =================================================================================================================== 00:32:36.538 [2024-11-20T06:30:58.816Z] Total : 22428.33 87.61 0.00 0.00 5703.99 2880.85 27743.57 00:32:36.538 { 00:32:36.538 "results": [ 00:32:36.538 { 00:32:36.538 "job": "Nvme0n1", 00:32:36.538 "core_mask": "0x2", 00:32:36.538 "workload": "randwrite", 00:32:36.538 "status": "finished", 00:32:36.538 "queue_depth": 128, 00:32:36.538 "io_size": 4096, 00:32:36.538 "runtime": 10.003374, 00:32:36.538 "iops": 22428.33268055358, 00:32:36.538 "mibps": 87.61067453341242, 00:32:36.538 "io_failed": 0, 00:32:36.538 "io_timeout": 0, 00:32:36.538 "avg_latency_us": 5703.987910328239, 00:32:36.538 "min_latency_us": 2880.8533333333335, 00:32:36.538 "max_latency_us": 27743.573333333334 00:32:36.538 } 00:32:36.538 ], 00:32:36.538 "core_count": 1 00:32:36.538 } 00:32:36.538 07:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3751569 00:32:36.538 07:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 3751569 ']' 00:32:36.538 07:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 3751569 
00:32:36.538 07:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:32:36.538 07:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:36.538 07:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3751569 00:32:36.816 07:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:36.816 07:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:36.816 07:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3751569' 00:32:36.816 killing process with pid 3751569 00:32:36.816 07:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 3751569 00:32:36.816 Received shutdown signal, test time was about 10.000000 seconds 00:32:36.816 00:32:36.816 Latency(us) 00:32:36.816 [2024-11-20T06:30:59.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.816 [2024-11-20T06:30:59.094Z] =================================================================================================================== 00:32:36.816 [2024-11-20T06:30:59.094Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:36.816 07:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 3751569 00:32:36.816 07:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:36.816 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:37.077 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a586a4b1-345c-4912-ae9b-e6a854be7a21 00:32:37.077 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:37.338 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:37.338 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:37.338 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:37.599 [2024-11-20 07:30:59.619650] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:37.599 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a586a4b1-345c-4912-ae9b-e6a854be7a21 
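The NOT wrapper invoked at this point asserts that the wrapped command fails: with the base aio bdev deleted, the lvstore is gone and bdev_lvol_get_lvstores is expected to error out. The clean-path verification just logged, condensed to the RPCs it performs (lvstore UUID from this run, paths shortened):

    scripts/rpc.py bdev_lvol_get_lvstores -u a586a4b1-345c-4912-ae9b-e6a854be7a21 \
        | jq -r '.[0].free_clusters'         # 61 = 99 data clusters - 38 allocated
    scripts/rpc.py bdev_aio_delete aio_bdev   # removing the base bdev closes the lvstore
    scripts/rpc.py bdev_lvol_get_lvstores -u a586a4b1-345c-4912-ae9b-e6a854be7a21
    # expected to fail with code -19 "No such device", as the JSON-RPC error below shows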
00:32:37.599 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:32:37.599 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a586a4b1-345c-4912-ae9b-e6a854be7a21 00:32:37.599 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:37.599 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:37.599 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:37.599 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:37.599 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:37.599 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:37.599 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:37.599 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:37.599 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a586a4b1-345c-4912-ae9b-e6a854be7a21 00:32:37.599 request: 00:32:37.599 { 00:32:37.599 "uuid": "a586a4b1-345c-4912-ae9b-e6a854be7a21", 00:32:37.599 "method": "bdev_lvol_get_lvstores", 00:32:37.599 "req_id": 1 00:32:37.599 } 00:32:37.599 Got JSON-RPC error response 00:32:37.599 response: 00:32:37.599 { 00:32:37.599 "code": -19, 00:32:37.599 "message": "No such device" 00:32:37.599 } 00:32:37.599 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:32:37.599 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:37.599 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:37.599 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:37.599 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:37.860 aio_bdev 00:32:37.860 07:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
a4b0d827-e06c-40c3-ab3d-e3ed14c7afee 00:32:37.860 07:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=a4b0d827-e06c-40c3-ab3d-e3ed14c7afee 00:32:37.860 07:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:37.860 07:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:32:37.860 07:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:37.860 07:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:37.860 07:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:38.121 07:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a4b0d827-e06c-40c3-ab3d-e3ed14c7afee -t 2000 00:32:38.121 [ 00:32:38.121 { 00:32:38.121 "name": "a4b0d827-e06c-40c3-ab3d-e3ed14c7afee", 00:32:38.121 "aliases": [ 00:32:38.121 "lvs/lvol" 00:32:38.121 ], 00:32:38.121 "product_name": "Logical Volume", 00:32:38.121 "block_size": 4096, 00:32:38.121 "num_blocks": 38912, 00:32:38.121 "uuid": "a4b0d827-e06c-40c3-ab3d-e3ed14c7afee", 00:32:38.121 "assigned_rate_limits": { 00:32:38.121 "rw_ios_per_sec": 0, 00:32:38.121 "rw_mbytes_per_sec": 0, 00:32:38.121 "r_mbytes_per_sec": 0, 00:32:38.121 "w_mbytes_per_sec": 0 00:32:38.121 }, 00:32:38.121 "claimed": false, 00:32:38.121 "zoned": false, 00:32:38.121 "supported_io_types": { 00:32:38.121 "read": true, 00:32:38.121 "write": true, 00:32:38.121 "unmap": true, 00:32:38.121 "flush": false, 00:32:38.121 "reset": true, 00:32:38.121 "nvme_admin": false, 00:32:38.121 "nvme_io": false, 00:32:38.121 "nvme_io_md": false, 00:32:38.121 "write_zeroes": true, 00:32:38.121 "zcopy": false, 00:32:38.121 "get_zone_info": false, 00:32:38.121 "zone_management": false, 00:32:38.121 "zone_append": false, 00:32:38.121 "compare": false, 00:32:38.121 "compare_and_write": false, 00:32:38.121 "abort": false, 00:32:38.121 "seek_hole": true, 00:32:38.121 "seek_data": true, 00:32:38.121 "copy": false, 00:32:38.121 "nvme_iov_md": false 00:32:38.121 }, 00:32:38.121 "driver_specific": { 00:32:38.121 "lvol": { 00:32:38.121 "lvol_store_uuid": "a586a4b1-345c-4912-ae9b-e6a854be7a21", 00:32:38.121 "base_bdev": "aio_bdev", 00:32:38.121 "thin_provision": false, 00:32:38.121 "num_allocated_clusters": 38, 00:32:38.122 "snapshot": false, 00:32:38.122 "clone": false, 00:32:38.122 "esnap_clone": false 00:32:38.122 } 00:32:38.122 } 00:32:38.122 } 00:32:38.122 ] 00:32:38.122 07:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:32:38.122 07:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a586a4b1-345c-4912-ae9b-e6a854be7a21 00:32:38.122 07:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:38.383 07:31:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:38.383 07:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a586a4b1-345c-4912-ae9b-e6a854be7a21 00:32:38.383 07:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:38.644 07:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:38.644 07:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a4b0d827-e06c-40c3-ab3d-e3ed14c7afee 00:32:38.906 07:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a586a4b1-345c-4912-ae9b-e6a854be7a21 00:32:38.906 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:39.167 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:39.167 00:32:39.167 real 0m15.982s 00:32:39.167 user 0m15.596s 00:32:39.167 sys 0m1.496s 00:32:39.167 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:39.167 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:39.167 ************************************ 00:32:39.167 END TEST lvs_grow_clean 00:32:39.167 ************************************ 00:32:39.167 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:39.167 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:39.167 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:39.167 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:39.167 ************************************ 00:32:39.167 START TEST lvs_grow_dirty 00:32:39.167 ************************************ 00:32:39.167 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:32:39.167 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:39.167 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:39.167 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:39.167 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:39.167 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:39.167 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:39.167 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:39.167 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:39.167 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:39.428 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:39.428 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:39.689 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=568494f2-6f4d-4f54-af65-d0fe6723fcec 00:32:39.689 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 568494f2-6f4d-4f54-af65-d0fe6723fcec 00:32:39.689 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:39.948 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:39.948 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:39.948 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 568494f2-6f4d-4f54-af65-d0fe6723fcec lvol 150 00:32:39.948 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f94eda16-742a-4d3d-8928-1d613f0996f8 00:32:39.948 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:39.949 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:40.208 [2024-11-20 07:31:02.323564] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:40.208 [2024-11-20 07:31:02.323728] 
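The dirty variant rebuilds the same fixture from scratch, and the sizes chosen here explain the cluster counts seen throughout the log: a 200 MiB file is 51200 blocks of 4 KiB, and at the 4 MiB cluster size that is 50 clusters, of which 49 are reported as total_data_clusters (the remainder presumably holds lvstore metadata). Growing the file to 400 MiB later doubles this to 102400 blocks and 99 data clusters. The setup steps just logged, condensed (paths shortened to the SPDK checkout root):

    truncate -s 200M test/nvmf/target/aio_bdev
    scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs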
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:40.208 true 00:32:40.208 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 568494f2-6f4d-4f54-af65-d0fe6723fcec 00:32:40.208 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:40.468 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:40.468 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:40.468 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f94eda16-742a-4d3d-8928-1d613f0996f8 00:32:40.728 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:40.988 [2024-11-20 07:31:03.032039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:40.988 07:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:40.988 07:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3754619 00:32:40.988 07:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:40.988 07:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:40.988 07:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3754619 /var/tmp/bdevperf.sock 00:32:40.988 07:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3754619 ']' 00:32:40.988 07:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:40.988 07:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:40.988 07:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:40.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
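The waitforlisten step above simply blocks until the freshly started bdevperf answers JSON-RPC on the given UNIX socket. Conceptually it is a retry loop like the following; this is a simplified stand-in, not the actual autotest helper, and it assumes rpc_get_methods as the liveness probe:

    while ! scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1    # retry until the socket exists and accepts RPC
    done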
00:32:40.988 07:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:40.988 07:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:41.248 [2024-11-20 07:31:03.266710] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:32:41.248 [2024-11-20 07:31:03.266766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754619 ] 00:32:41.248 [2024-11-20 07:31:03.353897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.248 [2024-11-20 07:31:03.397100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:41.820 07:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:41.820 07:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:32:41.820 07:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:42.080 Nvme0n1 00:32:42.080 07:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:42.341 [ 00:32:42.341 { 00:32:42.341 "name": "Nvme0n1", 00:32:42.341 "aliases": [ 00:32:42.341 "f94eda16-742a-4d3d-8928-1d613f0996f8" 00:32:42.341 ], 00:32:42.341 "product_name": "NVMe disk", 00:32:42.341 "block_size": 4096, 00:32:42.341 "num_blocks": 38912, 00:32:42.341 "uuid": "f94eda16-742a-4d3d-8928-1d613f0996f8", 00:32:42.341 "numa_id": 0, 00:32:42.341 "assigned_rate_limits": { 00:32:42.341 "rw_ios_per_sec": 0, 00:32:42.341 "rw_mbytes_per_sec": 0, 00:32:42.341 "r_mbytes_per_sec": 0, 00:32:42.341 "w_mbytes_per_sec": 0 00:32:42.341 }, 00:32:42.341 "claimed": false, 00:32:42.341 "zoned": false, 00:32:42.341 "supported_io_types": { 00:32:42.341 "read": true, 00:32:42.341 "write": true, 00:32:42.341 "unmap": true, 00:32:42.341 "flush": true, 00:32:42.341 "reset": true, 00:32:42.341 "nvme_admin": true, 00:32:42.341 "nvme_io": true, 00:32:42.341 "nvme_io_md": false, 00:32:42.341 "write_zeroes": true, 00:32:42.341 "zcopy": false, 00:32:42.341 "get_zone_info": false, 00:32:42.341 "zone_management": false, 00:32:42.341 "zone_append": false, 00:32:42.341 "compare": true, 00:32:42.341 "compare_and_write": true, 00:32:42.341 "abort": true, 00:32:42.341 "seek_hole": false, 00:32:42.341 "seek_data": false, 00:32:42.341 "copy": true, 00:32:42.341 "nvme_iov_md": false 00:32:42.341 }, 00:32:42.341 "memory_domains": [ 00:32:42.341 { 00:32:42.341 "dma_device_id": "system", 00:32:42.341 "dma_device_type": 1 00:32:42.341 } 00:32:42.341 ], 00:32:42.341 "driver_specific": { 00:32:42.341 "nvme": [ 00:32:42.341 { 00:32:42.341 "trid": { 00:32:42.341 "trtype": "TCP", 00:32:42.341 "adrfam": "IPv4", 00:32:42.341 "traddr": "10.0.0.2", 00:32:42.341 "trsvcid": "4420", 00:32:42.342 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:42.342 }, 00:32:42.342 "ctrlr_data": 
{ 00:32:42.342 "cntlid": 1, 00:32:42.342 "vendor_id": "0x8086", 00:32:42.342 "model_number": "SPDK bdev Controller", 00:32:42.342 "serial_number": "SPDK0", 00:32:42.342 "firmware_revision": "25.01", 00:32:42.342 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:42.342 "oacs": { 00:32:42.342 "security": 0, 00:32:42.342 "format": 0, 00:32:42.342 "firmware": 0, 00:32:42.342 "ns_manage": 0 00:32:42.342 }, 00:32:42.342 "multi_ctrlr": true, 00:32:42.342 "ana_reporting": false 00:32:42.342 }, 00:32:42.342 "vs": { 00:32:42.342 "nvme_version": "1.3" 00:32:42.342 }, 00:32:42.342 "ns_data": { 00:32:42.342 "id": 1, 00:32:42.342 "can_share": true 00:32:42.342 } 00:32:42.342 } 00:32:42.342 ], 00:32:42.342 "mp_policy": "active_passive" 00:32:42.342 } 00:32:42.342 } 00:32:42.342 ] 00:32:42.342 07:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3754746 00:32:42.342 07:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:42.342 07:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:42.602 Running I/O for 10 seconds... 00:32:43.542 Latency(us) 00:32:43.542 [2024-11-20T06:31:05.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:43.542 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:43.542 Nvme0n1 : 1.00 16764.00 65.48 0.00 0.00 0.00 0.00 0.00 00:32:43.542 [2024-11-20T06:31:05.820Z] =================================================================================================================== 00:32:43.542 [2024-11-20T06:31:05.820Z] Total : 16764.00 65.48 0.00 0.00 0.00 0.00 0.00 00:32:43.542 00:32:44.484 07:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 568494f2-6f4d-4f54-af65-d0fe6723fcec 00:32:44.484 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:44.484 Nvme0n1 : 2.00 16986.50 66.35 0.00 0.00 0.00 0.00 0.00 00:32:44.484 [2024-11-20T06:31:06.762Z] =================================================================================================================== 00:32:44.484 [2024-11-20T06:31:06.762Z] Total : 16986.50 66.35 0.00 0.00 0.00 0.00 0.00 00:32:44.484 00:32:44.484 true 00:32:44.484 07:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 568494f2-6f4d-4f54-af65-d0fe6723fcec 00:32:44.484 07:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:44.745 07:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:44.745 07:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:44.745 07:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3754746 00:32:45.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:45.688 Nvme0n1 : 
3.00 17076.67 66.71 0.00 0.00 0.00 0.00 0.00 00:32:45.688 [2024-11-20T06:31:07.966Z] =================================================================================================================== 00:32:45.688 [2024-11-20T06:31:07.966Z] Total : 17076.67 66.71 0.00 0.00 0.00 0.00 0.00 00:32:45.688 00:32:46.630 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:46.630 Nvme0n1 : 4.00 17220.75 67.27 0.00 0.00 0.00 0.00 0.00 00:32:46.630 [2024-11-20T06:31:08.908Z] =================================================================================================================== 00:32:46.630 [2024-11-20T06:31:08.908Z] Total : 17220.75 67.27 0.00 0.00 0.00 0.00 0.00 00:32:46.630 00:32:47.571 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:47.571 Nvme0n1 : 5.00 18323.20 71.58 0.00 0.00 0.00 0.00 0.00 00:32:47.571 [2024-11-20T06:31:09.849Z] =================================================================================================================== 00:32:47.571 [2024-11-20T06:31:09.849Z] Total : 18323.20 71.58 0.00 0.00 0.00 0.00 0.00 00:32:47.571 00:32:48.520 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:48.520 Nvme0n1 : 6.00 19481.50 76.10 0.00 0.00 0.00 0.00 0.00 00:32:48.520 [2024-11-20T06:31:10.798Z] =================================================================================================================== 00:32:48.520 [2024-11-20T06:31:10.798Z] Total : 19481.50 76.10 0.00 0.00 0.00 0.00 0.00 00:32:48.520 00:32:49.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:49.461 Nvme0n1 : 7.00 20327.00 79.40 0.00 0.00 0.00 0.00 0.00 00:32:49.461 [2024-11-20T06:31:11.739Z] =================================================================================================================== 00:32:49.461 [2024-11-20T06:31:11.739Z] Total : 20327.00 79.40 0.00 0.00 0.00 0.00 0.00 00:32:49.461 00:32:50.404 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:50.404 Nvme0n1 : 8.00 20961.12 81.88 0.00 0.00 0.00 0.00 0.00 00:32:50.404 [2024-11-20T06:31:12.682Z] =================================================================================================================== 00:32:50.404 [2024-11-20T06:31:12.682Z] Total : 20961.12 81.88 0.00 0.00 0.00 0.00 0.00 00:32:50.404 00:32:51.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:51.790 Nvme0n1 : 9.00 21454.33 83.81 0.00 0.00 0.00 0.00 0.00 00:32:51.790 [2024-11-20T06:31:14.068Z] =================================================================================================================== 00:32:51.790 [2024-11-20T06:31:14.068Z] Total : 21454.33 83.81 0.00 0.00 0.00 0.00 0.00 00:32:51.790 00:32:52.733 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:52.733 Nvme0n1 : 10.00 21848.90 85.35 0.00 0.00 0.00 0.00 0.00 00:32:52.733 [2024-11-20T06:31:15.011Z] =================================================================================================================== 00:32:52.733 [2024-11-20T06:31:15.011Z] Total : 21848.90 85.35 0.00 0.00 0.00 0.00 0.00 00:32:52.733 00:32:52.733 00:32:52.733 Latency(us) 00:32:52.733 [2024-11-20T06:31:15.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.733 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:52.733 Nvme0n1 : 10.00 21856.05 85.38 0.00 0.00 5853.22 4478.29 31675.73 00:32:52.733 
[2024-11-20T06:31:15.011Z] =================================================================================================================== 00:32:52.733 [2024-11-20T06:31:15.011Z] Total : 21856.05 85.38 0.00 0.00 5853.22 4478.29 31675.73 00:32:52.733 { 00:32:52.733 "results": [ 00:32:52.733 { 00:32:52.733 "job": "Nvme0n1", 00:32:52.733 "core_mask": "0x2", 00:32:52.733 "workload": "randwrite", 00:32:52.733 "status": "finished", 00:32:52.733 "queue_depth": 128, 00:32:52.733 "io_size": 4096, 00:32:52.733 "runtime": 10.002585, 00:32:52.733 "iops": 21856.05021102045, 00:32:52.733 "mibps": 85.37519613679864, 00:32:52.733 "io_failed": 0, 00:32:52.733 "io_timeout": 0, 00:32:52.733 "avg_latency_us": 5853.216731727176, 00:32:52.733 "min_latency_us": 4478.293333333333, 00:32:52.733 "max_latency_us": 31675.733333333334 00:32:52.733 } 00:32:52.733 ], 00:32:52.733 "core_count": 1 00:32:52.733 } 00:32:52.733 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3754619 00:32:52.733 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 3754619 ']' 00:32:52.733 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 3754619 00:32:52.733 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:32:52.733 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:52.733 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3754619 00:32:52.733 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:52.733 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:52.733 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3754619' 00:32:52.733 killing process with pid 3754619 00:32:52.733 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 3754619 00:32:52.733 Received shutdown signal, test time was about 10.000000 seconds 00:32:52.733 00:32:52.733 Latency(us) 00:32:52.733 [2024-11-20T06:31:15.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.733 [2024-11-20T06:31:15.011Z] =================================================================================================================== 00:32:52.733 [2024-11-20T06:31:15.011Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:52.733 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 3754619 00:32:52.733 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:52.994 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:32:52.994 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 568494f2-6f4d-4f54-af65-d0fe6723fcec 00:32:52.994 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:53.257 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:53.257 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:53.257 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3750862 00:32:53.257 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3750862 00:32:53.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3750862 Killed "${NVMF_APP[@]}" "$@" 00:32:53.257 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:53.257 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:53.257 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:53.257 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:53.257 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:53.257 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3756823 00:32:53.257 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3756823 00:32:53.257 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:53.257 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3756823 ']' 00:32:53.257 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:53.257 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:53.257 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:53.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
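This is the point of the "dirty" scenario: the previous nvmf_tgt was killed with SIGKILL (the kill -9 3750862 above, confirmed by the "Killed" shell message), so the lvstore on the aio file was never cleanly closed. A replacement target is then started in interrupt mode inside the test netns, per the command echoed in the log:

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1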
00:32:53.257 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:53.257 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:53.257 [2024-11-20 07:31:15.509473] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:53.257 [2024-11-20 07:31:15.510463] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:32:53.257 [2024-11-20 07:31:15.510507] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:53.518 [2024-11-20 07:31:15.603103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:53.518 [2024-11-20 07:31:15.634980] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:53.518 [2024-11-20 07:31:15.635007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:53.518 [2024-11-20 07:31:15.635012] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:53.518 [2024-11-20 07:31:15.635017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:53.518 [2024-11-20 07:31:15.635021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:53.518 [2024-11-20 07:31:15.635483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.518 [2024-11-20 07:31:15.686998] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:53.518 [2024-11-20 07:31:15.687191] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
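With the target back up, re-creating the aio bdev on top of the dirty file triggers recovery when the lvstore is loaded; the "Performing recovery on blobstore" and "Recover: blob 0x..." notices that follow are the expected evidence of that replay. The RPC is the same one used during setup:

    scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096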
00:32:54.089 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:54.089 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:32:54.089 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:54.089 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:54.089 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:54.089 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:54.089 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:54.350 [2024-11-20 07:31:16.509863] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:54.350 [2024-11-20 07:31:16.510105] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:54.350 [2024-11-20 07:31:16.510209] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:54.350 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:54.350 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f94eda16-742a-4d3d-8928-1d613f0996f8 00:32:54.350 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=f94eda16-742a-4d3d-8928-1d613f0996f8 00:32:54.350 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:54.350 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:32:54.350 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:54.350 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:54.350 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:54.611 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f94eda16-742a-4d3d-8928-1d613f0996f8 -t 2000 00:32:54.873 [ 00:32:54.873 { 00:32:54.873 "name": "f94eda16-742a-4d3d-8928-1d613f0996f8", 00:32:54.873 "aliases": [ 00:32:54.873 "lvs/lvol" 00:32:54.873 ], 00:32:54.873 "product_name": "Logical Volume", 00:32:54.873 "block_size": 4096, 00:32:54.873 "num_blocks": 38912, 00:32:54.873 "uuid": "f94eda16-742a-4d3d-8928-1d613f0996f8", 00:32:54.873 "assigned_rate_limits": { 00:32:54.873 "rw_ios_per_sec": 0, 00:32:54.873 "rw_mbytes_per_sec": 0, 00:32:54.873 
"r_mbytes_per_sec": 0, 00:32:54.873 "w_mbytes_per_sec": 0 00:32:54.873 }, 00:32:54.873 "claimed": false, 00:32:54.873 "zoned": false, 00:32:54.873 "supported_io_types": { 00:32:54.873 "read": true, 00:32:54.873 "write": true, 00:32:54.873 "unmap": true, 00:32:54.873 "flush": false, 00:32:54.873 "reset": true, 00:32:54.873 "nvme_admin": false, 00:32:54.873 "nvme_io": false, 00:32:54.873 "nvme_io_md": false, 00:32:54.873 "write_zeroes": true, 00:32:54.873 "zcopy": false, 00:32:54.873 "get_zone_info": false, 00:32:54.873 "zone_management": false, 00:32:54.873 "zone_append": false, 00:32:54.873 "compare": false, 00:32:54.873 "compare_and_write": false, 00:32:54.873 "abort": false, 00:32:54.873 "seek_hole": true, 00:32:54.873 "seek_data": true, 00:32:54.873 "copy": false, 00:32:54.873 "nvme_iov_md": false 00:32:54.873 }, 00:32:54.873 "driver_specific": { 00:32:54.873 "lvol": { 00:32:54.873 "lvol_store_uuid": "568494f2-6f4d-4f54-af65-d0fe6723fcec", 00:32:54.873 "base_bdev": "aio_bdev", 00:32:54.873 "thin_provision": false, 00:32:54.873 "num_allocated_clusters": 38, 00:32:54.873 "snapshot": false, 00:32:54.873 "clone": false, 00:32:54.873 "esnap_clone": false 00:32:54.873 } 00:32:54.873 } 00:32:54.873 } 00:32:54.873 ] 00:32:54.873 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:32:54.873 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 568494f2-6f4d-4f54-af65-d0fe6723fcec 00:32:54.873 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:54.873 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:54.873 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 568494f2-6f4d-4f54-af65-d0fe6723fcec 00:32:54.873 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:55.135 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:55.135 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:55.396 [2024-11-20 07:31:17.423964] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:55.396 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 568494f2-6f4d-4f54-af65-d0fe6723fcec 00:32:55.396 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:32:55.396 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 568494f2-6f4d-4f54-af65-d0fe6723fcec 00:32:55.396 07:31:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:55.396 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:55.396 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:55.396 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:55.396 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:55.396 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:55.396 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:55.396 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:55.396 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 568494f2-6f4d-4f54-af65-d0fe6723fcec 00:32:55.396 request: 00:32:55.396 { 00:32:55.396 "uuid": "568494f2-6f4d-4f54-af65-d0fe6723fcec", 00:32:55.396 "method": "bdev_lvol_get_lvstores", 00:32:55.396 "req_id": 1 00:32:55.396 } 00:32:55.396 Got JSON-RPC error response 00:32:55.396 response: 00:32:55.396 { 00:32:55.396 "code": -19, 00:32:55.396 "message": "No such device" 00:32:55.396 } 00:32:55.396 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:32:55.396 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:55.396 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:55.396 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:55.396 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:55.658 aio_bdev 00:32:55.658 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f94eda16-742a-4d3d-8928-1d613f0996f8 00:32:55.658 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=f94eda16-742a-4d3d-8928-1d613f0996f8 00:32:55.658 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:55.658 07:31:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:32:55.658 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:55.658 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:55.658 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:55.919 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f94eda16-742a-4d3d-8928-1d613f0996f8 -t 2000 00:32:55.919 [ 00:32:55.919 { 00:32:55.919 "name": "f94eda16-742a-4d3d-8928-1d613f0996f8", 00:32:55.919 "aliases": [ 00:32:55.919 "lvs/lvol" 00:32:55.919 ], 00:32:55.919 "product_name": "Logical Volume", 00:32:55.919 "block_size": 4096, 00:32:55.919 "num_blocks": 38912, 00:32:55.919 "uuid": "f94eda16-742a-4d3d-8928-1d613f0996f8", 00:32:55.919 "assigned_rate_limits": { 00:32:55.919 "rw_ios_per_sec": 0, 00:32:55.919 "rw_mbytes_per_sec": 0, 00:32:55.919 "r_mbytes_per_sec": 0, 00:32:55.919 "w_mbytes_per_sec": 0 00:32:55.919 }, 00:32:55.919 "claimed": false, 00:32:55.919 "zoned": false, 00:32:55.919 "supported_io_types": { 00:32:55.919 "read": true, 00:32:55.919 "write": true, 00:32:55.919 "unmap": true, 00:32:55.919 "flush": false, 00:32:55.919 "reset": true, 00:32:55.919 "nvme_admin": false, 00:32:55.919 "nvme_io": false, 00:32:55.919 "nvme_io_md": false, 00:32:55.919 "write_zeroes": true, 00:32:55.919 "zcopy": false, 00:32:55.919 "get_zone_info": false, 00:32:55.919 "zone_management": false, 00:32:55.919 "zone_append": false, 00:32:55.919 "compare": false, 00:32:55.919 "compare_and_write": false, 00:32:55.919 "abort": false, 00:32:55.919 "seek_hole": true, 00:32:55.919 "seek_data": true, 00:32:55.919 "copy": false, 00:32:55.919 "nvme_iov_md": false 00:32:55.919 }, 00:32:55.919 "driver_specific": { 00:32:55.919 "lvol": { 00:32:55.919 "lvol_store_uuid": "568494f2-6f4d-4f54-af65-d0fe6723fcec", 00:32:55.919 "base_bdev": "aio_bdev", 00:32:55.919 "thin_provision": false, 00:32:55.919 "num_allocated_clusters": 38, 00:32:55.919 "snapshot": false, 00:32:55.919 "clone": false, 00:32:55.919 "esnap_clone": false 00:32:55.919 } 00:32:55.919 } 00:32:55.919 } 00:32:55.919 ] 00:32:55.919 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:32:55.919 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 568494f2-6f4d-4f54-af65-d0fe6723fcec 00:32:55.919 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:56.180 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:56.180 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 568494f2-6f4d-4f54-af65-d0fe6723fcec 00:32:56.180 07:31:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:56.441 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:56.441 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f94eda16-742a-4d3d-8928-1d613f0996f8 00:32:56.441 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 568494f2-6f4d-4f54-af65-d0fe6723fcec 00:32:56.701 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:56.962 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:56.962 00:32:56.962 real 0m17.658s 00:32:56.962 user 0m35.499s 00:32:56.962 sys 0m3.211s 00:32:56.962 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:56.962 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:56.962 ************************************ 00:32:56.962 END TEST lvs_grow_dirty 00:32:56.962 ************************************ 00:32:56.962 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:56.962 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:32:56.962 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:32:56.962 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:32:56.962 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:56.962 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:32:56.962 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:32:56.962 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:32:56.962 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:56.962 nvmf_trace.0 00:32:56.962 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:32:56.962 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:56.962 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:56.962 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
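The lvs_grow_dirty checks above reduce to two RPC queries plus arithmetic asserts: the recreated lvstore must still report 99 data clusters (the grown size) with 61 free (99 minus the lvol's 38 allocated clusters shown in the bdev dump). A standalone sketch of that check, assuming a running target on the default /var/tmp/spdk.sock and the lvstore UUID from this run:

  #!/usr/bin/env bash
  # Re-run the cluster-count assertions from lvs_grow_dirty (sketch).
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  UUID=568494f2-6f4d-4f54-af65-d0fe6723fcec
  free_clusters=$("$RPC" bdev_lvol_get_lvstores -u "$UUID" | jq -r '.[0].free_clusters')
  data_clusters=$("$RPC" bdev_lvol_get_lvstores -u "$UUID" | jq -r '.[0].total_data_clusters')
  (( free_clusters == 61 ))   # 99 total minus 38 allocated by the lvol
  (( data_clusters == 99 ))   # lvstore kept its grown size across recreate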
00:32:56.962 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:56.962 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:56.962 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:56.962 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:56.962 rmmod nvme_tcp 00:32:56.962 rmmod nvme_fabrics 00:32:56.962 rmmod nvme_keyring 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3756823 ']' 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3756823 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 3756823 ']' 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 3756823 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3756823 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3756823' 00:32:57.223 killing process with pid 3756823 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 3756823 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 3756823 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:57.223 07:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:59.769 00:32:59.769 real 0m45.050s 00:32:59.769 user 0m54.035s 00:32:59.769 sys 0m10.918s 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:59.769 ************************************ 00:32:59.769 END TEST nvmf_lvs_grow 00:32:59.769 ************************************ 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:59.769 ************************************ 00:32:59.769 START TEST nvmf_bdev_io_wait 00:32:59.769 ************************************ 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:59.769 * Looking for test storage... 
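nvmftestfini, traced above, tears the fixture down in a fixed order: unload the host-side NVMe modules, kill the target by pid, strip only the SPDK-tagged iptables rules, then drop the test namespace and flush the initiator-side address. Condensed into one sketch (the netns deletion is an assumption about what _remove_spdk_ns amounts to; its output is redirected away in the trace):

  # Teardown order from nvmftestfini (condensed sketch).
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics   # host modules
  kill "$nvmfpid" && wait "$nvmfpid"                       # target pid, 3756823 in this run
  # Remove only rules carrying the SPDK_NVMF comment, keep the rest:
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk                          # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                                 # clear the initiator address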
00:32:59.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:59.769 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:59.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.770 --rc genhtml_branch_coverage=1 00:32:59.770 --rc genhtml_function_coverage=1 00:32:59.770 --rc genhtml_legend=1 00:32:59.770 --rc geninfo_all_blocks=1 00:32:59.770 --rc geninfo_unexecuted_blocks=1 00:32:59.770 00:32:59.770 ' 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:59.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.770 --rc genhtml_branch_coverage=1 00:32:59.770 --rc genhtml_function_coverage=1 00:32:59.770 --rc genhtml_legend=1 00:32:59.770 --rc geninfo_all_blocks=1 00:32:59.770 --rc geninfo_unexecuted_blocks=1 00:32:59.770 00:32:59.770 ' 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:59.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.770 --rc genhtml_branch_coverage=1 00:32:59.770 --rc genhtml_function_coverage=1 00:32:59.770 --rc genhtml_legend=1 00:32:59.770 --rc geninfo_all_blocks=1 00:32:59.770 --rc geninfo_unexecuted_blocks=1 00:32:59.770 00:32:59.770 ' 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:59.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.770 --rc genhtml_branch_coverage=1 00:32:59.770 --rc genhtml_function_coverage=1 00:32:59.770 --rc genhtml_legend=1 00:32:59.770 --rc geninfo_all_blocks=1 00:32:59.770 --rc 
geninfo_unexecuted_blocks=1 00:32:59.770 00:32:59.770 ' 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:59.770 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.771 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:59.771 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.771 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:59.771 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:59.771 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:59.771 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
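The lcov gate near the start of this test (lt 1.15 2 via cmp_versions) compares dotted version strings field by field: split both on '.', walk the fields left to right, and decide at the first unequal numeric field. A simplified standalone rendering of that logic (not the exact scripts/common.sh code, which also handles '-' and ':' separators):

  # version_lt A B: succeed when dotted version A sorts before B
  # (simplified sketch of the cmp_versions walk traced above).
  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
          (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
      done
      return 1   # equal is not less-than
  }
  version_lt 1.15 2 && echo "lcov is pre-2.0: use the old option set"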
00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
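gather_supported_nvmf_pci_devs, above, builds allowlists of Intel e810/x722 and Mellanox vendor:device pairs and keeps only NICs that match (here the two 0x8086:0x159b ports found next). The same matching done as a direct sysfs scan; this is a sketch, since the real common.sh resolves devices through a prebuilt PCI bus cache rather than globbing sysfs:

  # List NICs whose vendor:device pair is on the e810 allowlist.
  intel=0x8086
  e810=(0x1592 0x159b)
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(<"$dev/vendor") device=$(<"$dev/device")
      [[ $vendor == "$intel" ]] || continue
      for id in "${e810[@]}"; do
          if [[ $device == "$id" ]]; then
              echo "Found ${dev##*/} ($vendor - $device)"
              ls "$dev/net" 2> /dev/null   # bound net device name(s), e.g. cvl_0_0
          fi
      done
  done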
00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:07.916 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:07.916 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:07.916 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:07.916 
07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:07.916 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:07.916 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:07.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:07.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:33:07.917 00:33:07.917 --- 10.0.0.2 ping statistics --- 00:33:07.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.917 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:07.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:07.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:33:07.917 00:33:07.917 --- 10.0.0.1 ping statistics --- 00:33:07.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.917 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3761742 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3761742 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 3761742 ']' 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:07.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
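nvmf_tcp_init, traced above, gives the test a real two-endpoint topology on one host by hiding the target port in a network namespace: cvl_0_0 (10.0.0.2) serves the target inside cvl_0_0_ns_spdk, cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator, and the iptables rule is tagged with an SPDK_NVMF comment so teardown can later remove exactly that rule. The setup collected into one sketch, with the names and addresses from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                  # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator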
00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:07.917 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:07.917 [2024-11-20 07:31:29.462141] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:07.917 [2024-11-20 07:31:29.463270] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:33:07.917 [2024-11-20 07:31:29.463320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:07.917 [2024-11-20 07:31:29.561414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:07.917 [2024-11-20 07:31:29.615342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:07.917 [2024-11-20 07:31:29.615392] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:07.917 [2024-11-20 07:31:29.615401] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:07.917 [2024-11-20 07:31:29.615408] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:07.917 [2024-11-20 07:31:29.615414] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:07.917 [2024-11-20 07:31:29.617748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:07.917 [2024-11-20 07:31:29.617909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:07.917 [2024-11-20 07:31:29.618073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.917 [2024-11-20 07:31:29.618073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:07.917 [2024-11-20 07:31:29.618413] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
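The startup notices above also say how to inspect the target while it runs: every tracepoint group is enabled (-e 0xFFFF), so a snapshot can be taken live or pulled from shared memory, which is what the earlier process_shm step did when it archived nvmf_trace.0. The two commands, as printed by the target itself (archive path shortened here):

  spdk_trace -s nvmf -i 0                                      # live snapshot of app instance 0
  tar -C /dev/shm -czf nvmf_trace.0_shm.tar.gz nvmf_trace.0    # offline copy, as process_shm did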
00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:08.179 [2024-11-20 07:31:30.390544] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:08.179 [2024-11-20 07:31:30.390971] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:08.179 [2024-11-20 07:31:30.391055] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:08.179 [2024-11-20 07:31:30.391242] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
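Because the target was launched with --wait-for-rpc, it sits parked until the test pushes pre-init options over /var/tmp/spdk.sock; only framework_start_init brings the poll-group threads up (in interrupt mode here). The two calls traced above, as a sketch; -p and -c are the bdev_io pool and cache sizes, shrunk here presumably so the bdev_io_wait conditions are easy to hit:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$RPC" bdev_set_options -p 5 -c 1   # tiny bdev_io pool (5) and per-channel cache (1)
  "$RPC" framework_start_init         # subsystems init; nvmf_tgt_poll_group threads start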
00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:08.179 [2024-11-20 07:31:30.402921] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:08.179 Malloc0 00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.179 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:08.441 [2024-11-20 07:31:30.479304] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3762089 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3762091 00:33:08.441 07:31:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:08.441 { 00:33:08.441 "params": { 00:33:08.441 "name": "Nvme$subsystem", 00:33:08.441 "trtype": "$TEST_TRANSPORT", 00:33:08.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:08.441 "adrfam": "ipv4", 00:33:08.441 "trsvcid": "$NVMF_PORT", 00:33:08.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:08.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:08.441 "hdgst": ${hdgst:-false}, 00:33:08.441 "ddgst": ${ddgst:-false} 00:33:08.441 }, 00:33:08.441 "method": "bdev_nvme_attach_controller" 00:33:08.441 } 00:33:08.441 EOF 00:33:08.441 )") 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3762093 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:08.441 { 00:33:08.441 "params": { 00:33:08.441 "name": "Nvme$subsystem", 00:33:08.441 "trtype": "$TEST_TRANSPORT", 00:33:08.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:08.441 "adrfam": "ipv4", 00:33:08.441 "trsvcid": "$NVMF_PORT", 00:33:08.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:08.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:08.441 "hdgst": ${hdgst:-false}, 00:33:08.441 "ddgst": ${ddgst:-false} 00:33:08.441 }, 00:33:08.441 "method": "bdev_nvme_attach_controller" 00:33:08.441 } 00:33:08.441 EOF 00:33:08.441 )") 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3762096 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 
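With the framework initialized, the entire target side is provisioned over RPC before any bdevperf instance connects: TCP transport, one 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE), a subsystem carrying that namespace, and a listener on 10.0.0.2:4420. Collected from the trace above:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" bdev_malloc_create 64 512 -b Malloc0
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420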
00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:08.441 { 00:33:08.441 "params": { 00:33:08.441 "name": "Nvme$subsystem", 00:33:08.441 "trtype": "$TEST_TRANSPORT", 00:33:08.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:08.441 "adrfam": "ipv4", 00:33:08.441 "trsvcid": "$NVMF_PORT", 00:33:08.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:08.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:08.441 "hdgst": ${hdgst:-false}, 00:33:08.441 "ddgst": ${ddgst:-false} 00:33:08.441 }, 00:33:08.441 "method": "bdev_nvme_attach_controller" 00:33:08.441 } 00:33:08.441 EOF 00:33:08.441 )") 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:08.441 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:08.442 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:08.442 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:08.442 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:08.442 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:08.442 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:08.442 { 00:33:08.442 "params": { 00:33:08.442 "name": "Nvme$subsystem", 00:33:08.442 "trtype": "$TEST_TRANSPORT", 00:33:08.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:08.442 "adrfam": "ipv4", 00:33:08.442 "trsvcid": "$NVMF_PORT", 00:33:08.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:08.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:08.442 "hdgst": ${hdgst:-false}, 00:33:08.442 "ddgst": ${ddgst:-false} 00:33:08.442 }, 00:33:08.442 "method": "bdev_nvme_attach_controller" 00:33:08.442 } 00:33:08.442 EOF 00:33:08.442 )") 00:33:08.442 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:08.442 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3762089 00:33:08.442 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:08.442 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:08.442 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:33:08.442 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:08.442 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:08.442 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:08.442 "params": { 00:33:08.442 "name": "Nvme1", 00:33:08.442 "trtype": "tcp", 00:33:08.442 "traddr": "10.0.0.2", 00:33:08.442 "adrfam": "ipv4", 00:33:08.442 "trsvcid": "4420", 00:33:08.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:08.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:08.442 "hdgst": false, 00:33:08.442 "ddgst": false 00:33:08.442 }, 00:33:08.442 "method": "bdev_nvme_attach_controller" 00:33:08.442 }' 00:33:08.442 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:08.442 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:08.442 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:08.442 "params": { 00:33:08.442 "name": "Nvme1", 00:33:08.442 "trtype": "tcp", 00:33:08.442 "traddr": "10.0.0.2", 00:33:08.442 "adrfam": "ipv4", 00:33:08.442 "trsvcid": "4420", 00:33:08.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:08.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:08.442 "hdgst": false, 00:33:08.442 "ddgst": false 00:33:08.442 }, 00:33:08.442 "method": "bdev_nvme_attach_controller" 00:33:08.442 }' 00:33:08.442 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:08.442 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:08.442 "params": { 00:33:08.442 "name": "Nvme1", 00:33:08.442 "trtype": "tcp", 00:33:08.442 "traddr": "10.0.0.2", 00:33:08.442 "adrfam": "ipv4", 00:33:08.442 "trsvcid": "4420", 00:33:08.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:08.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:08.442 "hdgst": false, 00:33:08.442 "ddgst": false 00:33:08.442 }, 00:33:08.442 "method": "bdev_nvme_attach_controller" 00:33:08.442 }' 00:33:08.442 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:08.442 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:08.442 "params": { 00:33:08.442 "name": "Nvme1", 00:33:08.442 "trtype": "tcp", 00:33:08.442 "traddr": "10.0.0.2", 00:33:08.442 "adrfam": "ipv4", 00:33:08.442 "trsvcid": "4420", 00:33:08.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:08.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:08.442 "hdgst": false, 00:33:08.442 "ddgst": false 00:33:08.442 }, 00:33:08.442 "method": "bdev_nvme_attach_controller" 00:33:08.442 }' 00:33:08.442 [2024-11-20 07:31:30.539221] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:33:08.442 [2024-11-20 07:31:30.539228] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
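Worth noting in the four command lines above: --json /dev/fd/63 is what a bash process substitution expands to, so each bdevperf reads its generated config from an anonymous pipe instead of a temp file on disk. Reconstructed as it would look in the script source (workspace path as in this log, helper as sketched earlier), the write job's launch is roughly:

# write job; the read/flush/unmap jobs differ only in -m, -i and -w
./spdk/build/examples/bdevperf -m 0x10 -i 1 \
    --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w write -t 1 -s 256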
00:33:08.442 [2024-11-20 07:31:30.539299] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:08.442 [2024-11-20 07:31:30.539299] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:08.442 [2024-11-20 07:31:30.543533] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:33:08.442 [2024-11-20 07:31:30.543603] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:33:08.442 [2024-11-20 07:31:30.550713] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:33:08.442 [2024-11-20 07:31:30.550772] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:08.703 [2024-11-20 07:31:30.773334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.703 [2024-11-20 07:31:30.812335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:08.703 [2024-11-20 07:31:30.865723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.703 [2024-11-20 07:31:30.908562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:08.703 [2024-11-20 07:31:30.931414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.703 [2024-11-20 07:31:30.967768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:08.964 [2024-11-20 07:31:30.999764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.964 [2024-11-20 07:31:31.036165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:08.964 Running I/O for 1 seconds... 00:33:08.964 Running I/O for 1 seconds... 00:33:08.964 Running I/O for 1 seconds... 00:33:08.964 Running I/O for 1 seconds...
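All four bdevperf instances run concurrently, each pinned to its own core mask (0x10/0x20/0x40/0x80) and shm id (-i 1..4, which is where the spdk1..spdk4 file prefixes in the EAL lines above come from); the script then reaps them with wait, starting with the write PID. A sketch of that launch-and-collect pattern (gen_nvmf_target_json assumed as before):

declare -A mask=( [write]=0x10 [read]=0x20 [flush]=0x40 [unmap]=0x80 )
declare -A pid=()
i=1
for w in write read flush unmap; do
    ./spdk/build/examples/bdevperf -m "${mask[$w]}" -i "$i" \
        --json <(gen_nvmf_target_json) -q 128 -o 4096 -w "$w" -t 1 -s 256 &
    pid[$w]=$!
    i=$((i + 1))
done
for w in write read flush unmap; do
    wait "${pid[$w]}"    # propagate each job's exit status
done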
00:33:09.906 14762.00 IOPS, 57.66 MiB/s 00:33:09.906 Latency(us) 00:33:09.906 [2024-11-20T06:31:32.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:09.906 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:33:09.906 Nvme1n1 : 1.01 14824.08 57.91 0.00 0.00 8607.45 3399.68 11905.71 00:33:09.906 [2024-11-20T06:31:32.184Z] =================================================================================================================== 00:33:09.906 [2024-11-20T06:31:32.184Z] Total : 14824.08 57.91 0.00 0.00 8607.45 3399.68 11905.71 00:33:09.906 6184.00 IOPS, 24.16 MiB/s 00:33:09.906 Latency(us) 00:33:09.906 [2024-11-20T06:31:32.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:09.906 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:33:09.906 Nvme1n1 : 1.02 6224.46 24.31 0.00 0.00 20400.57 2471.25 28180.48 00:33:09.906 [2024-11-20T06:31:32.184Z] =================================================================================================================== 00:33:09.906 [2024-11-20T06:31:32.184Z] Total : 6224.46 24.31 0.00 0.00 20400.57 2471.25 28180.48 00:33:09.906 188352.00 IOPS, 735.75 MiB/s 00:33:09.906 Latency(us) 00:33:09.906 [2024-11-20T06:31:32.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:09.906 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:33:09.906 Nvme1n1 : 1.00 187979.72 734.30 0.00 0.00 677.29 300.37 1979.73 00:33:09.906 [2024-11-20T06:31:32.184Z] =================================================================================================================== 00:33:09.906 [2024-11-20T06:31:32.184Z] Total : 187979.72 734.30 0.00 0.00 677.29 300.37 1979.73 00:33:10.167 6272.00 IOPS, 24.50 MiB/s 00:33:10.167 Latency(us) 00:33:10.167 [2024-11-20T06:31:32.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:10.167 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:33:10.167 Nvme1n1 : 1.01 6358.58 24.84 0.00 0.00 20060.52 4833.28 39758.51 00:33:10.167 [2024-11-20T06:31:32.445Z] =================================================================================================================== 00:33:10.167 [2024-11-20T06:31:32.445Z] Total : 6358.58 24.84 0.00 0.00 20060.52 4833.28 39758.51 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3762091 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3762093 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3762096 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:10.167 rmmod nvme_tcp 00:33:10.167 rmmod nvme_fabrics 00:33:10.167 rmmod nvme_keyring 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3761742 ']' 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3761742 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 3761742 ']' 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 3761742 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:10.167 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3761742 00:33:10.428 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:10.428 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:10.428 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3761742' 00:33:10.428 killing process with pid 3761742 00:33:10.428 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 3761742 00:33:10.428 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 3761742 00:33:10.428 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:10.428 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:10.428 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:10.428 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:10.428 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
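A quick sanity check on the result tables above: the MiB/s column is simply IOPS multiplied by the I/O size (4096 bytes, from the -o flag) and divided by 2^20. For example:

awk 'BEGIN { printf "%.2f\n", 14824.08  * 4096 / 1048576 }'   # 57.91  (unmap job)
awk 'BEGIN { printf "%.2f\n", 187979.72 * 4096 / 1048576 }'   # 734.30 (flush job)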
00:33:10.428 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:10.428 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:33:10.428 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:10.428 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:10.428 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.428 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:10.428 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:12.975 00:33:12.975 real 0m13.078s 00:33:12.975 user 0m15.765s 00:33:12.975 sys 0m7.640s 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:12.975 ************************************ 00:33:12.975 END TEST nvmf_bdev_io_wait 00:33:12.975 ************************************ 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:12.975 ************************************ 00:33:12.975 START TEST nvmf_queue_depth 00:33:12.975 ************************************ 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:12.975 * Looking for test storage... 
00:33:12.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:12.975 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:12.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.975 --rc genhtml_branch_coverage=1 00:33:12.975 --rc genhtml_function_coverage=1 00:33:12.975 --rc genhtml_legend=1 00:33:12.975 --rc geninfo_all_blocks=1 00:33:12.975 --rc geninfo_unexecuted_blocks=1 00:33:12.975 00:33:12.975 ' 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:12.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.975 --rc genhtml_branch_coverage=1 00:33:12.975 --rc genhtml_function_coverage=1 00:33:12.975 --rc genhtml_legend=1 00:33:12.975 --rc geninfo_all_blocks=1 00:33:12.975 --rc geninfo_unexecuted_blocks=1 00:33:12.975 00:33:12.975 ' 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:12.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.975 --rc genhtml_branch_coverage=1 00:33:12.975 --rc genhtml_function_coverage=1 00:33:12.975 --rc genhtml_legend=1 00:33:12.975 --rc geninfo_all_blocks=1 00:33:12.975 --rc geninfo_unexecuted_blocks=1 00:33:12.975 00:33:12.975 ' 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:12.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.975 --rc genhtml_branch_coverage=1 00:33:12.975 --rc genhtml_function_coverage=1 00:33:12.975 --rc genhtml_legend=1 00:33:12.975 --rc geninfo_all_blocks=1 00:33:12.975 --rc 
geninfo_unexecuted_blocks=1 00:33:12.975 00:33:12.975 ' 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:12.975 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:33:12.976 07:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:21.114 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:21.114 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:33:21.114 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:21.114 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:21.114 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:21.114 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
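Here gather_supported_nvmf_pci_devs builds whitelists of PCI device IDs (the e810/x722 arrays for Intel parts, the 0x15b3 entries for Mellanox) and then resolves each matching PCI address to its kernel net device through sysfs, which is exactly what the pci_net_devs glob a few lines below does. Stripped to its core, that lookup is (PCI address taken from later in this log):

pci=0000:4b:00.0
for netdir in /sys/bus/pci/devices/$pci/net/*; do
    [ -e "$netdir" ] || continue       # skip devices with no net interface
    echo "Found net devices under $pci: ${netdir##*/}"
done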
00:33:21.114 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:21.114 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:33:21.114 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:21.114 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:33:21.114 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:33:21.114 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:33:21.114 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:33:21.114 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:33:21.114 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:33:21.114 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:21.114 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:21.115 07:31:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:21.115 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:21.115 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:33:21.115 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:21.115 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:21.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:21.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.562 ms 00:33:21.115 00:33:21.115 --- 10.0.0.2 ping statistics --- 00:33:21.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.115 rtt min/avg/max/mdev = 0.562/0.562/0.562/0.000 ms 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:21.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:21.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:33:21.115 00:33:21.115 --- 10.0.0.1 ping statistics --- 00:33:21.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.115 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:21.115 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:21.116 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:21.116 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:21.116 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:33:21.116 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:21.116 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:21.116 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:21.116 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3766449 00:33:21.116 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3766449 00:33:21.116 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:21.116 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3766449 ']' 00:33:21.116 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:21.116 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:21.116 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:21.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
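The nvmf_tcp_init sequence above splits the two E810 ports across namespaces: cvl_0_0 becomes the target interface inside cvl_0_0_ns_spdk with 10.0.0.2, cvl_0_1 stays in the default namespace as the initiator with 10.0.0.1, and both directions are ping-verified before any RPC is issued. Condensed from the trace, the topology setup is:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, default ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator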
00:33:21.116 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:21.116 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:21.116 [2024-11-20 07:31:42.537741] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:21.116 [2024-11-20 07:31:42.538879] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:33:21.116 [2024-11-20 07:31:42.538929] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:21.116 [2024-11-20 07:31:42.641150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.116 [2024-11-20 07:31:42.693367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:21.116 [2024-11-20 07:31:42.693412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:21.116 [2024-11-20 07:31:42.693420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:21.116 [2024-11-20 07:31:42.693427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:21.116 [2024-11-20 07:31:42.693433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:21.116 [2024-11-20 07:31:42.694177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:21.116 [2024-11-20 07:31:42.769435] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:21.116 [2024-11-20 07:31:42.769728] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
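With --interrupt-mode the target's reactor and its spdk_threads run in intr mode rather than polling (the thread.c notices above confirm both app_thread and the nvmf poll group). waitforlisten then blocks until the RPC socket answers before any nvmf_create_transport call is made. A minimal readiness-wait sketch, assuming the default /var/tmp/spdk.sock path:

ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
while ! ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "target died" >&2; exit 1; }
    sleep 0.5
done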
00:33:21.116 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:21.116 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:33:21.116 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:21.116 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:21.116 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:21.377 [2024-11-20 07:31:43.399002] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:21.377 Malloc0 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:21.377 [2024-11-20 07:31:43.475201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3766801 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3766801 /var/tmp/bdevperf.sock 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3766801 ']' 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:21.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:21.377 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:21.377 [2024-11-20 07:31:43.533395] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
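Everything queue_depth.sh just did to the target went over JSON-RPC, and the workload that follows is driven the same way. Issued by hand, the sequence amounts to the sketch below; the commands and arguments are copied from the trace above and below, while the standalone scripts/rpc.py form and the default target socket (/var/tmp/spdk.sock) are assumptions.

    # Target side: transport, backing bdev, subsystem, namespace, listener (as logged):
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: 1024-deep 4 KiB verify workload for 10 s over bdevperf's own socket:
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

As the results below show, throughput ramps from about 8.2k to 12.2k IOPS over the 10 s run; the summary's 47.69 MiB/s is simply the final rate times the I/O size (12209.79 IOPS × 4096 B / 2^20 ≈ 47.69 MiB/s).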
00:33:21.377 [2024-11-20 07:31:43.533459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3766801 ] 00:33:21.377 [2024-11-20 07:31:43.626407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.637 [2024-11-20 07:31:43.679640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.208 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:22.208 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:33:22.208 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:22.208 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.208 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:22.469 NVMe0n1 00:33:22.469 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.469 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:22.469 Running I/O for 10 seconds... 00:33:24.793 8198.00 IOPS, 32.02 MiB/s [2024-11-20T06:31:48.009Z] 8704.00 IOPS, 34.00 MiB/s [2024-11-20T06:31:48.950Z] 9210.67 IOPS, 35.98 MiB/s [2024-11-20T06:31:49.889Z] 10227.00 IOPS, 39.95 MiB/s [2024-11-20T06:31:50.829Z] 10850.00 IOPS, 42.38 MiB/s [2024-11-20T06:31:51.769Z] 11267.00 IOPS, 44.01 MiB/s [2024-11-20T06:31:52.836Z] 11568.29 IOPS, 45.19 MiB/s [2024-11-20T06:31:53.779Z] 11843.38 IOPS, 46.26 MiB/s [2024-11-20T06:31:54.719Z] 12035.00 IOPS, 47.01 MiB/s [2024-11-20T06:31:54.980Z] 12187.00 IOPS, 47.61 MiB/s 00:33:32.702 Latency(us) 00:33:32.702 [2024-11-20T06:31:54.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:32.702 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:32.702 Verification LBA range: start 0x0 length 0x4000 00:33:32.702 NVMe0n1 : 10.06 12209.79 47.69 0.00 0.00 83581.44 25122.13 76021.76 00:33:32.702 [2024-11-20T06:31:54.980Z] =================================================================================================================== 00:33:32.702 [2024-11-20T06:31:54.980Z] Total : 12209.79 47.69 0.00 0.00 83581.44 25122.13 76021.76 00:33:32.702 { 00:33:32.702 "results": [ 00:33:32.702 { 00:33:32.702 "job": "NVMe0n1", 00:33:32.702 "core_mask": "0x1", 00:33:32.702 "workload": "verify", 00:33:32.702 "status": "finished", 00:33:32.702 "verify_range": { 00:33:32.702 "start": 0, 00:33:32.702 "length": 16384 00:33:32.702 }, 00:33:32.702 "queue_depth": 1024, 00:33:32.702 "io_size": 4096, 00:33:32.702 "runtime": 10.061349, 00:33:32.702 "iops": 12209.794133967524, 00:33:32.702 "mibps": 47.69450833581064, 00:33:32.702 "io_failed": 0, 00:33:32.702 "io_timeout": 0, 00:33:32.702 "avg_latency_us": 83581.43818701312, 00:33:32.702 "min_latency_us": 25122.133333333335, 00:33:32.702 "max_latency_us": 76021.76 00:33:32.702 } 00:33:32.702 ], 
00:33:32.702 "core_count": 1 00:33:32.702 } 00:33:32.702 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3766801 00:33:32.702 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3766801 ']' 00:33:32.702 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3766801 00:33:32.702 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:33:32.702 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:32.702 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3766801 00:33:32.702 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:32.702 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:32.702 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3766801' 00:33:32.702 killing process with pid 3766801 00:33:32.702 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3766801 00:33:32.702 Received shutdown signal, test time was about 10.000000 seconds 00:33:32.702 00:33:32.702 Latency(us) 00:33:32.702 [2024-11-20T06:31:54.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:32.702 [2024-11-20T06:31:54.980Z] =================================================================================================================== 00:33:32.702 [2024-11-20T06:31:54.980Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:32.702 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3766801 00:33:32.702 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:32.702 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:32.702 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:32.702 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:32.702 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:32.702 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:32.702 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:32.702 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:32.702 rmmod nvme_tcp 00:33:32.702 rmmod nvme_fabrics 00:33:32.963 rmmod nvme_keyring 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:33:32.963 07:31:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3766449 ']' 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3766449 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3766449 ']' 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3766449 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3766449 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3766449' 00:33:32.963 killing process with pid 3766449 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3766449 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3766449 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:32.963 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:35.505 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:35.506 00:33:35.506 real 0m22.466s 00:33:35.506 user 0m24.816s 00:33:35.506 sys 0m7.389s 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:35.506 ************************************ 00:33:35.506 END TEST nvmf_queue_depth 00:33:35.506 ************************************ 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:35.506 ************************************ 00:33:35.506 START TEST nvmf_target_multipath 00:33:35.506 ************************************ 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:35.506 * Looking for test storage... 00:33:35.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:35.506 07:31:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:35.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.506 --rc genhtml_branch_coverage=1 00:33:35.506 --rc genhtml_function_coverage=1 00:33:35.506 --rc genhtml_legend=1 00:33:35.506 --rc geninfo_all_blocks=1 00:33:35.506 --rc geninfo_unexecuted_blocks=1 00:33:35.506 00:33:35.506 ' 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:35.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.506 --rc genhtml_branch_coverage=1 00:33:35.506 --rc genhtml_function_coverage=1 00:33:35.506 --rc genhtml_legend=1 00:33:35.506 --rc geninfo_all_blocks=1 00:33:35.506 --rc geninfo_unexecuted_blocks=1 00:33:35.506 00:33:35.506 ' 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:35.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.506 --rc genhtml_branch_coverage=1 00:33:35.506 --rc genhtml_function_coverage=1 00:33:35.506 --rc genhtml_legend=1 00:33:35.506 --rc geninfo_all_blocks=1 00:33:35.506 --rc 
geninfo_unexecuted_blocks=1 00:33:35.506 00:33:35.506 ' 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:35.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.506 --rc genhtml_branch_coverage=1 00:33:35.506 --rc genhtml_function_coverage=1 00:33:35.506 --rc genhtml_legend=1 00:33:35.506 --rc geninfo_all_blocks=1 00:33:35.506 --rc geninfo_unexecuted_blocks=1 00:33:35.506 00:33:35.506 ' 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
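The NVME_HOSTNQN captured just above is not hard-coded: common.sh asks nvme-cli to generate a UUID-based host NQN on each run. The command is standard nvme-cli; the output line shown here is just the value from this trace.

    $ nvme gen-hostnqn
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be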
00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:35.506 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:35.507 07:31:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:35.507 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
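From this point nvmftestinit probes the PCI NICs and, for NET_TYPE=phy over TCP, splits the two detected E810 ports across a network namespace so one machine can act as both target (10.0.0.2 inside cvl_0_0_ns_spdk) and initiator (10.0.0.1 on the host). The full trace follows; condensed to its effective commands, all of which appear verbatim in the nvmf_tcp_init lines below, the plumbing is roughly this sketch:

    # Condensed from the nvmf_tcp_init trace below; interface names as detected:
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

The paired pings that close the setup confirm both directions are reachable before any NVMe/TCP traffic is attempted; in this multipath run the script then exits early ("only one NIC for nvmf test") because no second target IP was configured.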
00:33:43.646 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:43.646 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:43.646 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:43.646 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:43.647 07:32:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:43.647 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:43.647 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:43.647 07:32:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:43.647 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:43.647 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:43.647 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:43.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:43.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms 00:33:43.648 00:33:43.648 --- 10.0.0.2 ping statistics --- 00:33:43.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.648 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:43.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:43.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:33:43.648 00:33:43.648 --- 10.0.0.1 ping statistics --- 00:33:43.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.648 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:43.648 only one NIC for nvmf test 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:43.648 rmmod nvme_tcp 00:33:43.648 rmmod nvme_fabrics 00:33:43.648 rmmod nvme_keyring 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:43.648 07:32:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:43.648 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.034 07:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:45.034 07:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:45.034 07:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:45.034 07:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:45.034 07:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:45.034 07:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:45.034 07:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:45.034 07:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:45.034 07:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:45.034 07:32:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:45.034 00:33:45.034 real 0m9.677s 00:33:45.034 user 0m2.070s 00:33:45.034 sys 0m5.555s 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:45.034 ************************************ 00:33:45.034 END TEST nvmf_target_multipath 00:33:45.034 ************************************ 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:45.034 ************************************ 00:33:45.034 START TEST nvmf_zcopy 00:33:45.034 ************************************ 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:45.034 * Looking for test storage... 
00:33:45.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:45.034 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:33:45.035 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:45.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.296 --rc genhtml_branch_coverage=1 00:33:45.296 --rc genhtml_function_coverage=1 00:33:45.296 --rc genhtml_legend=1 00:33:45.296 --rc geninfo_all_blocks=1 00:33:45.296 --rc geninfo_unexecuted_blocks=1 00:33:45.296 00:33:45.296 ' 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:45.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.296 --rc genhtml_branch_coverage=1 00:33:45.296 --rc genhtml_function_coverage=1 00:33:45.296 --rc genhtml_legend=1 00:33:45.296 --rc geninfo_all_blocks=1 00:33:45.296 --rc geninfo_unexecuted_blocks=1 00:33:45.296 00:33:45.296 ' 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:45.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.296 --rc genhtml_branch_coverage=1 00:33:45.296 --rc genhtml_function_coverage=1 00:33:45.296 --rc genhtml_legend=1 00:33:45.296 --rc geninfo_all_blocks=1 00:33:45.296 --rc geninfo_unexecuted_blocks=1 00:33:45.296 00:33:45.296 ' 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:45.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.296 --rc genhtml_branch_coverage=1 00:33:45.296 --rc genhtml_function_coverage=1 00:33:45.296 --rc genhtml_legend=1 00:33:45.296 --rc geninfo_all_blocks=1 00:33:45.296 --rc geninfo_unexecuted_blocks=1 00:33:45.296 00:33:45.296 ' 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.296 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:45.297 07:32:07 
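Note how each re-source of paths/export.sh prepends the same Go/protoc/golangci directories again, so the PATH echoed above carries several copies of every toolchain entry. A hedged cleanup sketch, not part of the test scripts, that would collapse the duplicates while keeping first-occurrence order:

    # Split PATH on ':', keep the first occurrence of each entry, re-join.
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
    export PATH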
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:45.297 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:53.436 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:53.436 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:53.436 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:53.436 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:53.436 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:53.436 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:53.436 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:53.436 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:53.436 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:53.436 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:53.436 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:53.436 07:32:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:53.436 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:53.436 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:53.436 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:53.436 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:53.436 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:53.436 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:53.436 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:53.437 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:53.437 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:53.437 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:53.437 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:53.437 07:32:14 
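The nvmf_tcp_init steps traced here build a two-endpoint topology out of the detected E810 pair: cvl_0_0 moves into a private network namespace to serve as the target side, while cvl_0_1 stays in the root namespace as the initiator. Condensed from the trace (device names and 10.0.0.x addresses are this run's):

    ip netns add cvl_0_0_ns_spdk                    # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # move the target port in
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

The iptables ACCEPT rule that follows opens TCP port 4420 on the initiator-side interface, and the two pings confirm root-namespace-to-namespace connectivity in both directions before the target starts.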
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:53.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:53.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:33:53.437 00:33:53.437 --- 10.0.0.2 ping statistics --- 00:33:53.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.437 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:53.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:53.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:33:53.437 00:33:53.437 --- 10.0.0.1 ping statistics --- 00:33:53.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.437 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:53.437 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:53.438 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:53.438 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:53.438 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:53.438 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3777151 00:33:53.438 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3777151 00:33:53.438 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:53.438 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 3777151 ']' 00:33:53.438 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:53.438 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:53.438 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:53.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:53.438 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:53.438 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:53.438 [2024-11-20 07:32:14.725175] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:53.438 [2024-11-20 07:32:14.726312] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:33:53.438 [2024-11-20 07:32:14.726363] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:53.438 [2024-11-20 07:32:14.826584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.438 [2024-11-20 07:32:14.876502] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:53.438 [2024-11-20 07:32:14.876551] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:53.438 [2024-11-20 07:32:14.876559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:53.438 [2024-11-20 07:32:14.876566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:53.438 [2024-11-20 07:32:14.876573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:53.438 [2024-11-20 07:32:14.877349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:53.438 [2024-11-20 07:32:14.954206] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:53.438 [2024-11-20 07:32:14.954495] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
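nvmfappstart has launched nvmf_tgt inside the target namespace with --interrupt-mode (hence the intr-mode notices above) and now sits in waitforlisten until the app answers on its RPC socket. A rough sketch of such a wait loop, assuming the default /var/tmp/spdk.sock socket and scripts/rpc.py; the real helper's internals are not shown in this trace:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # app died while starting
            # rpc_get_methods succeeds once the RPC server is accepting
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1                                      # timed out
    }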
00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:53.438 [2024-11-20 07:32:15.590211] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:53.438 [2024-11-20 07:32:15.618493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:53.438 07:32:15 
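With the target running, zcopy.sh configures it over RPC: a TCP transport with zero-copy enabled (--zcopy; -c 0 sets the in-capsule data size to zero and -o disables the C2H-success optimization), subsystem cnode1 with data and discovery listeners on 10.0.0.2:4420, and a 32 MiB malloc bdev. The nvmf_subsystem_add_ns call that attaches malloc0 as namespace 1 follows immediately below. The traced sequence, condensed:

    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10              # allow any host, 10 ns max
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0   # 32 MiB, 4 KiB blocks
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1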
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:53.438 malloc0 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:53.438 { 00:33:53.438 "params": { 00:33:53.438 "name": "Nvme$subsystem", 00:33:53.438 "trtype": "$TEST_TRANSPORT", 00:33:53.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:53.438 "adrfam": "ipv4", 00:33:53.438 "trsvcid": "$NVMF_PORT", 00:33:53.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:53.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:53.438 "hdgst": ${hdgst:-false}, 00:33:53.438 "ddgst": ${ddgst:-false} 00:33:53.438 }, 00:33:53.438 "method": "bdev_nvme_attach_controller" 00:33:53.438 } 00:33:53.438 EOF 00:33:53.438 )") 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:53.438 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:53.438 "params": { 00:33:53.438 "name": "Nvme1", 00:33:53.438 "trtype": "tcp", 00:33:53.438 "traddr": "10.0.0.2", 00:33:53.438 "adrfam": "ipv4", 00:33:53.438 "trsvcid": "4420", 00:33:53.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:53.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:53.438 "hdgst": false, 00:33:53.438 "ddgst": false 00:33:53.438 }, 00:33:53.438 "method": "bdev_nvme_attach_controller" 00:33:53.438 }' 00:33:53.700 [2024-11-20 07:32:15.726127] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
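gen_nvmf_target_json has rendered the bdev_nvme_attach_controller object printed just above, and bdevperf reads it as a complete JSON config through an anonymous fd (--json /dev/fd/62). A hedged reconstruction of that round trip; the outer "subsystems"/"bdev" wrapper is assumed from how SPDK apps consume JSON configs and is not itself printed in this trace:

    build/examples/bdevperf -t 10 -q 128 -w verify -o 8192 --json <(cat << 'JSON'
    {"subsystems": [{"subsystem": "bdev", "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {"name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                 "adrfam": "ipv4", "trsvcid": "4420",
                 "subnqn": "nqn.2016-06.io.spdk:cnode1",
                 "hostnqn": "nqn.2016-06.io.spdk:host1",
                 "hdgst": false, "ddgst": false}}]}]}
    JSON
    )

Attaching the controller as "Nvme1" yields the bdev Nvme1n1, which is the job name that appears in the latency table below.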
00:33:53.700 [2024-11-20 07:32:15.726200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3777458 ] 00:33:53.700 [2024-11-20 07:32:15.821271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.700 [2024-11-20 07:32:15.874165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:53.961 Running I/O for 10 seconds... 00:33:55.848 6435.00 IOPS, 50.27 MiB/s [2024-11-20T06:32:19.512Z] 6478.00 IOPS, 50.61 MiB/s [2024-11-20T06:32:20.457Z] 6493.33 IOPS, 50.73 MiB/s [2024-11-20T06:32:21.401Z] 6518.00 IOPS, 50.92 MiB/s [2024-11-20T06:32:22.342Z] 7148.60 IOPS, 55.85 MiB/s [2024-11-20T06:32:23.283Z] 7584.83 IOPS, 59.26 MiB/s [2024-11-20T06:32:24.227Z] 7897.86 IOPS, 61.70 MiB/s [2024-11-20T06:32:25.167Z] 8135.75 IOPS, 63.56 MiB/s [2024-11-20T06:32:26.550Z] 8312.56 IOPS, 64.94 MiB/s [2024-11-20T06:32:26.550Z] 8458.90 IOPS, 66.09 MiB/s 00:34:04.272 Latency(us) 00:34:04.272 [2024-11-20T06:32:26.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:04.272 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:34:04.272 Verification LBA range: start 0x0 length 0x1000 00:34:04.272 Nvme1n1 : 10.01 8462.32 66.11 0.00 0.00 15079.75 2566.83 27743.57 00:34:04.272 [2024-11-20T06:32:26.550Z] =================================================================================================================== 00:34:04.272 [2024-11-20T06:32:26.550Z] Total : 8462.32 66.11 0.00 0.00 15079.75 2566.83 27743.57 00:34:04.272 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3779350 00:34:04.272 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:34:04.272 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:04.272 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:34:04.272 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:34:04.272 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:04.272 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:04.272 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:04.272 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:04.272 { 00:34:04.272 "params": { 00:34:04.272 "name": "Nvme$subsystem", 00:34:04.272 "trtype": "$TEST_TRANSPORT", 00:34:04.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:04.272 "adrfam": "ipv4", 00:34:04.272 "trsvcid": "$NVMF_PORT", 00:34:04.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:04.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:04.272 "hdgst": ${hdgst:-false}, 00:34:04.272 "ddgst": ${ddgst:-false} 00:34:04.272 }, 00:34:04.272 "method": "bdev_nvme_attach_controller" 00:34:04.272 } 00:34:04.272 EOF 00:34:04.272 )") 00:34:04.272 [2024-11-20 07:32:26.217747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:34:04.272 [2024-11-20 07:32:26.217778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.272 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:04.272 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:04.272 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:04.272 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:04.272 "params": { 00:34:04.272 "name": "Nvme1", 00:34:04.272 "trtype": "tcp", 00:34:04.272 "traddr": "10.0.0.2", 00:34:04.272 "adrfam": "ipv4", 00:34:04.272 "trsvcid": "4420", 00:34:04.272 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:04.272 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:04.272 "hdgst": false, 00:34:04.272 "ddgst": false 00:34:04.272 }, 00:34:04.272 "method": "bdev_nvme_attach_controller" 00:34:04.272 }' 00:34:04.272 [2024-11-20 07:32:26.229717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.272 [2024-11-20 07:32:26.229726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.272 [2024-11-20 07:32:26.241715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.272 [2024-11-20 07:32:26.241728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.272 [2024-11-20 07:32:26.253715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.272 [2024-11-20 07:32:26.253723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.272 [2024-11-20 07:32:26.263249] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
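The second bdevperf instance (perfpid 3779350 above) is starting here. Its traced invocation drives five seconds of mixed random I/O instead of verification, reading its attach-controller JSON from another anonymous fd:

    # -t 5       run for 5 seconds
    # -q 128     queue depth 128
    # -w randrw  random mixed read/write workload
    # -M 50      50% reads / 50% writes
    # -o 8192    8 KiB I/O size
    build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192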
00:34:04.272 [2024-11-20 07:32:26.263296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3779350 ] 00:34:04.272 [2024-11-20 07:32:26.265715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.272 [2024-11-20 07:32:26.265723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.272 [2024-11-20 07:32:26.277714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.272 [2024-11-20 07:32:26.277722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.272 [2024-11-20 07:32:26.289715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.272 [2024-11-20 07:32:26.289722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.272 [2024-11-20 07:32:26.301715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.272 [2024-11-20 07:32:26.301723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.272 [2024-11-20 07:32:26.313714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.272 [2024-11-20 07:32:26.313722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.272 [2024-11-20 07:32:26.325715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.272 [2024-11-20 07:32:26.325722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.272 [2024-11-20 07:32:26.337714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.273 [2024-11-20 07:32:26.337721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.273 [2024-11-20 07:32:26.346361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.273 [2024-11-20 07:32:26.349715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.273 [2024-11-20 07:32:26.349723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.273 [2024-11-20 07:32:26.361716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.273 [2024-11-20 07:32:26.361725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.273 [2024-11-20 07:32:26.373715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.273 [2024-11-20 07:32:26.373725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.273 [2024-11-20 07:32:26.376060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.273 [2024-11-20 07:32:26.385719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.273 [2024-11-20 07:32:26.385727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.273 [2024-11-20 07:32:26.397720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.273 [2024-11-20 07:32:26.397733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.273 [2024-11-20 07:32:26.409718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:34:04.273 [2024-11-20 07:32:26.409730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.273 [2024-11-20 07:32:26.421716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.273 [2024-11-20 07:32:26.421727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.273 [2024-11-20 07:32:26.433717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.273 [2024-11-20 07:32:26.433731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.273 [2024-11-20 07:32:26.445722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.273 [2024-11-20 07:32:26.445737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.273 [2024-11-20 07:32:26.457716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.273 [2024-11-20 07:32:26.457725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.273 [2024-11-20 07:32:26.469716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.273 [2024-11-20 07:32:26.469725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.273 [2024-11-20 07:32:26.481714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.273 [2024-11-20 07:32:26.481721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.273 [2024-11-20 07:32:26.493714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.273 [2024-11-20 07:32:26.493721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.273 [2024-11-20 07:32:26.505714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.273 [2024-11-20 07:32:26.505721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.273 [2024-11-20 07:32:26.517715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.273 [2024-11-20 07:32:26.517724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.273 [2024-11-20 07:32:26.529716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.273 [2024-11-20 07:32:26.529726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.273 [2024-11-20 07:32:26.541719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.273 [2024-11-20 07:32:26.541735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.533 Running I/O for 5 seconds... 
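From here to the end of the section the log is an unbroken run of these paired subsystem.c/nvmf_rpc.c records. They appear intentional rather than a failure: re-adding NSID 1 can never succeed while malloc0 is still attached, but every attempt pauses and then resumes the subsystem while bdevperf I/O is in flight, exercising that pause/resume path under load. A hedged sketch of a loop that would produce this pattern; the actual zcopy.sh loop body is not visible in this trace:

    while kill -0 "$perfpid" 2> /dev/null; do   # while bdevperf still runs
        # Expected to fail with 'Requested NSID 1 already in use';
        # the side effect is a subsystem pause/resume under live I/O.
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done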
00:34:04.533 [2024-11-20 07:32:26.558929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.533 [2024-11-20 07:32:26.558945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.533 [2024-11-20 07:32:26.572918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.533 [2024-11-20 07:32:26.572934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.533 [2024-11-20 07:32:26.586189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.533 [2024-11-20 07:32:26.586206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.533 [2024-11-20 07:32:26.601000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.533 [2024-11-20 07:32:26.601015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.533 [2024-11-20 07:32:26.613716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.533 [2024-11-20 07:32:26.613731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.533 [2024-11-20 07:32:26.626508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.533 [2024-11-20 07:32:26.626523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.533 [2024-11-20 07:32:26.640865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.533 [2024-11-20 07:32:26.640880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.533 [2024-11-20 07:32:26.654124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.533 [2024-11-20 07:32:26.654138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.533 [2024-11-20 07:32:26.668705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.533 [2024-11-20 07:32:26.668720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.533 [2024-11-20 07:32:26.681879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.533 [2024-11-20 07:32:26.681899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.533 [2024-11-20 07:32:26.695038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.533 [2024-11-20 07:32:26.695053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.533 [2024-11-20 07:32:26.709277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.533 [2024-11-20 07:32:26.709292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.533 [2024-11-20 07:32:26.722174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.533 [2024-11-20 07:32:26.722188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.533 [2024-11-20 07:32:26.737148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.533 [2024-11-20 07:32:26.737166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.533 [2024-11-20 07:32:26.750219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.533 
[2024-11-20 07:32:26.750233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:04.533 [2024-11-20 07:32:26.765156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:04.533 [2024-11-20 07:32:26.765176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:05.314 19031.00 IOPS, 148.68 MiB/s [2024-11-20T06:32:27.592Z] [2024-11-20 07:32:27.554082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:05.314 [2024-11-20 07:32:27.554096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:06.416 19081.00 IOPS, 149.07 MiB/s [2024-11-20T06:32:28.694Z] [2024-11-20 07:32:28.569080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:06.416 [2024-11-20 07:32:28.569095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:07.456 19105.00 IOPS, 149.26 MiB/s [2024-11-20T06:32:29.734Z] [2024-11-20 07:32:29.565336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:07.456 [2024-11-20 07:32:29.565350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.501 [2024-11-20 07:32:30.514041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.501 [2024-11-20 07:32:30.528403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.501 [2024-11-20 07:32:30.528417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.501 [2024-11-20 07:32:30.541207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.501 [2024-11-20 07:32:30.541222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.501 [2024-11-20 07:32:30.554094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.501 [2024-11-20 07:32:30.554108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.501 19099.25 IOPS, 149.21 MiB/s [2024-11-20T06:32:30.779Z] [2024-11-20 07:32:30.568375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.501 [2024-11-20 07:32:30.568390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.501 [2024-11-20 07:32:30.581303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.501 [2024-11-20 07:32:30.581317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.501 [2024-11-20 07:32:30.594088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.501 [2024-11-20 07:32:30.594102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.501 [2024-11-20 07:32:30.608734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.501 [2024-11-20 07:32:30.608749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.501 [2024-11-20 07:32:30.621817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.501 [2024-11-20 07:32:30.621831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.501 [2024-11-20 07:32:30.634771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.501 [2024-11-20 07:32:30.634785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.501 [2024-11-20 07:32:30.648747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.501 [2024-11-20 07:32:30.648762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.501 [2024-11-20 07:32:30.661627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.501 [2024-11-20 07:32:30.661642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.501 [2024-11-20 07:32:30.674381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.501 [2024-11-20 07:32:30.674395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.501 [2024-11-20 07:32:30.689013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.501 [2024-11-20 07:32:30.689028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.501 [2024-11-20 07:32:30.702236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:34:08.501 [2024-11-20 07:32:30.702250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.501 [2024-11-20 07:32:30.716560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.501 [2024-11-20 07:32:30.716574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.501 [2024-11-20 07:32:30.729527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.501 [2024-11-20 07:32:30.729541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.501 [2024-11-20 07:32:30.742733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.501 [2024-11-20 07:32:30.742747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.501 [2024-11-20 07:32:30.756743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.501 [2024-11-20 07:32:30.756757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.501 [2024-11-20 07:32:30.769919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.501 [2024-11-20 07:32:30.769933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.762 [2024-11-20 07:32:30.782836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.762 [2024-11-20 07:32:30.782858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.762 [2024-11-20 07:32:30.797008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.762 [2024-11-20 07:32:30.797023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.762 [2024-11-20 07:32:30.809923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.762 [2024-11-20 07:32:30.809938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.762 [2024-11-20 07:32:30.822422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.762 [2024-11-20 07:32:30.822436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.762 [2024-11-20 07:32:30.836636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.762 [2024-11-20 07:32:30.836650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.762 [2024-11-20 07:32:30.849651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.762 [2024-11-20 07:32:30.849666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.762 [2024-11-20 07:32:30.862275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.762 [2024-11-20 07:32:30.862289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.762 [2024-11-20 07:32:30.876980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.762 [2024-11-20 07:32:30.876994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.762 [2024-11-20 07:32:30.890062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.762 [2024-11-20 07:32:30.890076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.762 [2024-11-20 07:32:30.905062] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.762 [2024-11-20 07:32:30.905076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.762 [2024-11-20 07:32:30.918194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.763 [2024-11-20 07:32:30.918209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.763 [2024-11-20 07:32:30.932842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.763 [2024-11-20 07:32:30.932857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.763 [2024-11-20 07:32:30.945736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.763 [2024-11-20 07:32:30.945750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.763 [2024-11-20 07:32:30.958904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.763 [2024-11-20 07:32:30.958918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.763 [2024-11-20 07:32:30.973002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.763 [2024-11-20 07:32:30.973016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.763 [2024-11-20 07:32:30.985963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.763 [2024-11-20 07:32:30.985978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.763 [2024-11-20 07:32:30.998927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.763 [2024-11-20 07:32:30.998942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.763 [2024-11-20 07:32:31.013008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.763 [2024-11-20 07:32:31.013023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.763 [2024-11-20 07:32:31.025711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.763 [2024-11-20 07:32:31.025726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.024 [2024-11-20 07:32:31.038925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.024 [2024-11-20 07:32:31.038943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.024 [2024-11-20 07:32:31.052986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.024 [2024-11-20 07:32:31.053000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.024 [2024-11-20 07:32:31.066196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.024 [2024-11-20 07:32:31.066210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.024 [2024-11-20 07:32:31.081008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.024 [2024-11-20 07:32:31.081022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.024 [2024-11-20 07:32:31.093921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.024 [2024-11-20 07:32:31.093935] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.024 [2024-11-20 07:32:31.106710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.024 [2024-11-20 07:32:31.106724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.024 [2024-11-20 07:32:31.121323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.024 [2024-11-20 07:32:31.121338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.024 [2024-11-20 07:32:31.134462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.024 [2024-11-20 07:32:31.134477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.024 [2024-11-20 07:32:31.148717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.024 [2024-11-20 07:32:31.148732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.024 [2024-11-20 07:32:31.162030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.024 [2024-11-20 07:32:31.162045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.024 [2024-11-20 07:32:31.176759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.024 [2024-11-20 07:32:31.176775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.024 [2024-11-20 07:32:31.189584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.024 [2024-11-20 07:32:31.189599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.024 [2024-11-20 07:32:31.202974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.024 [2024-11-20 07:32:31.202989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.024 [2024-11-20 07:32:31.216947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.025 [2024-11-20 07:32:31.216961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.025 [2024-11-20 07:32:31.229852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.025 [2024-11-20 07:32:31.229867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.025 [2024-11-20 07:32:31.242766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.025 [2024-11-20 07:32:31.242781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.025 [2024-11-20 07:32:31.257242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.025 [2024-11-20 07:32:31.257256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.025 [2024-11-20 07:32:31.270334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.025 [2024-11-20 07:32:31.270348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.025 [2024-11-20 07:32:31.284847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.025 [2024-11-20 07:32:31.284862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.025 [2024-11-20 07:32:31.297946] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.025 [2024-11-20 07:32:31.297965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.285 [2024-11-20 07:32:31.310773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.285 [2024-11-20 07:32:31.310788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.285 [2024-11-20 07:32:31.325360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.285 [2024-11-20 07:32:31.325375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.285 [2024-11-20 07:32:31.338388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.285 [2024-11-20 07:32:31.338402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.285 [2024-11-20 07:32:31.352694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.285 [2024-11-20 07:32:31.352710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.285 [2024-11-20 07:32:31.365676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.285 [2024-11-20 07:32:31.365691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.285 [2024-11-20 07:32:31.379054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.285 [2024-11-20 07:32:31.379068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.285 [2024-11-20 07:32:31.392494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.285 [2024-11-20 07:32:31.392509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.285 [2024-11-20 07:32:31.405452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.285 [2024-11-20 07:32:31.405467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.285 [2024-11-20 07:32:31.417870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.285 [2024-11-20 07:32:31.417884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.285 [2024-11-20 07:32:31.430564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.285 [2024-11-20 07:32:31.430579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.285 [2024-11-20 07:32:31.445239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.285 [2024-11-20 07:32:31.445254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.285 [2024-11-20 07:32:31.458424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.285 [2024-11-20 07:32:31.458439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.285 [2024-11-20 07:32:31.472826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.285 [2024-11-20 07:32:31.472841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.285 [2024-11-20 07:32:31.485615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.285 [2024-11-20 07:32:31.485629] 
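The repeating pair above is this phase's expected failure path: the test keeps re-issuing the add-namespace RPC for a namespace ID that is already attached, and every attempt is rejected. Reduced to a standalone sketch (a hypothetical reproduction, not the exact loop in test/nvmf/target/zcopy.sh; assumes SPDK's scripts/rpc.py and the default /var/tmp/spdk.sock socket):

# Sketch: NSID 1 is already attached to cnode1, so each call fails with
# "Requested NSID 1 already in use" followed by "Unable to add namespace".
for i in $(seq 1 100); do
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done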
[... the ERROR pair continues, [2024-11-20 07:32:31.498469] through [2024-11-20 07:32:31.565596] ...]
00:34:09.546 19102.40 IOPS, 149.24 MiB/s [2024-11-20T06:32:31.824Z]
00:34:09.546
00:34:09.546                                                                Latency(us)
00:34:09.546 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:09.546 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:34:09.546 Nvme1n1                     :       5.01   19106.12     149.27       0.00     0.00    6693.31    2744.32   11687.25
00:34:09.546 ===================================================================================================================
00:34:09.546 Total                       :              19106.12     149.27       0.00     0.00    6693.31    2744.32   11687.25
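As a consistency check on the summary table: 19106.12 IOPS at the job's 8192-byte I/O size is 19106.12 × 8192 = 156517335 bytes/s, and 156517335 / 1048576 ≈ 149.27 MiB/s, matching the MiB/s column; over the 5.01 s runtime that is roughly 95,700 completed I/Os.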
[... the final ERROR pairs arrive at ~12 ms intervals, [2024-11-20 07:32:31.573718] through [2024-11-20 07:32:31.669724], as the add-namespace loop winds down ...]
00:34:09.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3779350) - No such process
00:34:09.546 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3779350
00:34:09.546 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:09.546 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.546 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:09.546 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.546 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:34:09.546 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.546 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:09.546 delay0
00:34:09.546 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.546 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:34:09.546 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.546 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:09.546 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.546 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:34:09.806 [2024-11-20 07:32:31.834549] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:34:17.941 Initializing NVMe Controllers
00:34:17.941 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:17.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:34:17.941 Initialization complete. Launching workers.
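The rpc_cmd sequence above swaps the plain malloc0 namespace for a delay bdev, so that queued I/O lingers long enough for the abort workload launched at zcopy.sh@56 to have commands to cancel. As plain rpc.py calls it is roughly (a sketch, assuming scripts/rpc.py and the default RPC socket; the -r/-t/-w/-n values set the delay bdev's average and p99 read/write latencies):

# Sketch of the namespace swap traced above.
./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1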
00:34:17.941 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 233, failed: 34740 00:34:17.941 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 34850, failed to submit 123 00:34:17.941 success 34775, unsuccessful 75, failed 0 00:34:17.941 07:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:34:17.941 07:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:34:17.941 07:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:17.941 07:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:34:17.941 07:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:17.941 07:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:34:17.941 07:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:17.941 07:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:17.941 rmmod nvme_tcp 00:34:17.941 rmmod nvme_fabrics 00:34:17.941 rmmod nvme_keyring 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3777151 ']' 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3777151 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 3777151 ']' 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 3777151 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3777151 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3777151' 00:34:17.941 killing process with pid 3777151 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 3777151 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 3777151 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:17.941 07:32:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:17.941 07:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:19.325 00:34:19.325 real 0m34.205s 00:34:19.325 user 0m43.655s 00:34:19.325 sys 0m12.534s 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:19.325 ************************************ 00:34:19.325 END TEST nvmf_zcopy 00:34:19.325 ************************************ 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:19.325 ************************************ 00:34:19.325 START TEST nvmf_nmic 00:34:19.325 ************************************ 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:19.325 * Looking for test storage... 
00:34:19.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:19.325 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:19.326 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:34:19.326 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:19.326 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:19.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.326 --rc genhtml_branch_coverage=1 00:34:19.326 --rc genhtml_function_coverage=1 00:34:19.326 --rc genhtml_legend=1 00:34:19.326 --rc geninfo_all_blocks=1 00:34:19.326 --rc geninfo_unexecuted_blocks=1 00:34:19.326 00:34:19.326 ' 00:34:19.326 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:19.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.326 --rc genhtml_branch_coverage=1 00:34:19.326 --rc genhtml_function_coverage=1 00:34:19.326 --rc genhtml_legend=1 00:34:19.326 --rc geninfo_all_blocks=1 00:34:19.326 --rc geninfo_unexecuted_blocks=1 00:34:19.326 00:34:19.326 ' 00:34:19.326 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:19.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.326 --rc genhtml_branch_coverage=1 00:34:19.326 --rc genhtml_function_coverage=1 00:34:19.326 --rc genhtml_legend=1 00:34:19.326 --rc geninfo_all_blocks=1 00:34:19.326 --rc geninfo_unexecuted_blocks=1 00:34:19.326 00:34:19.326 ' 00:34:19.326 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:19.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.326 --rc genhtml_branch_coverage=1 00:34:19.326 --rc genhtml_function_coverage=1 00:34:19.326 --rc genhtml_legend=1 00:34:19.326 --rc geninfo_all_blocks=1 00:34:19.326 --rc geninfo_unexecuted_blocks=1 00:34:19.326 00:34:19.326 ' 00:34:19.326 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:19.587 07:32:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:34:19.587 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:27.729 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:27.729 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:34:27.729 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:27.729 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:27.729 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:27.729 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:27.729 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:27.729 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:27.729 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:27.729 07:32:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:27.729 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:27.729 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:27.729 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:27.730 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:27.730 07:32:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:27.730 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:27.730 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:27.730 
07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:27.730 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:27.730 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:27.731 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:27.731 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:27.731 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:27.731 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
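Taken together, the nvmf_tcp_init trace here and in the lines just below amounts to this topology: the target-side port cvl_0_0 (10.0.0.2) moves into a private network namespace, while cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator. Condensed into a standalone sketch (the commands are the ones traced, minus the helper indirection):

ip netns add cvl_0_0_ns_spdk                                        # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                  # target reachable from root ns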
00:34:27.731 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:27.731 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:27.731 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:27.731 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:27.731 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:27.731 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:27.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:27.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:34:27.731 00:34:27.731 --- 10.0.0.2 ping statistics --- 00:34:27.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:27.731 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:34:27.731 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:27.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:27.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:34:27.731 00:34:27.731 --- 10.0.0.1 ping statistics --- 00:34:27.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:27.731 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:34:27.731 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:27.731 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:34:27.731 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:27.731 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:27.731 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:27.731 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:27.731 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:27.731 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:27.731 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3785842 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 3785842 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 3785842 ']' 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:27.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:27.731 [2024-11-20 07:32:49.066134] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:27.731 [2024-11-20 07:32:49.067114] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:34:27.731 [2024-11-20 07:32:49.067151] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:27.731 [2024-11-20 07:32:49.163085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:27.731 [2024-11-20 07:32:49.211052] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:27.731 [2024-11-20 07:32:49.211097] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:27.731 [2024-11-20 07:32:49.211105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:27.731 [2024-11-20 07:32:49.211113] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:27.731 [2024-11-20 07:32:49.211119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:27.731 [2024-11-20 07:32:49.213226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:27.731 [2024-11-20 07:32:49.213454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.731 [2024-11-20 07:32:49.213455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:27.731 [2024-11-20 07:32:49.213293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:27.731 [2024-11-20 07:32:49.287365] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:27.731 [2024-11-20 07:32:49.288710] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:27.731 [2024-11-20 07:32:49.288778] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
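The target is launched inside the namespace with -i 0 (shared-memory id), -e 0xFFFF (tracepoint group mask), --interrupt-mode, and a four-core mask (-m 0xF); waitforlisten then blocks, with max_retries=100 as the trace shows, until the app answers on /var/tmp/spdk.sock. A rough shape of that launch-and-wait, assuming it runs from the spdk checkout (sketch; the real helper in autotest_common.sh is more thorough):

    # Sketch of the launch-and-wait pattern above, not the actual helper.
    NS_EXEC="ip netns exec cvl_0_0_ns_spdk"
    $NS_EXEC ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do              # max_retries=100, as in the log
        # rpc_get_methods is a cheap liveness probe; -t 1 caps the RPC wait at 1s
        ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        kill -0 "$nvmfpid" || exit 1             # bail out if the target died
        sleep 0.5
    done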
00:34:27.731 [2024-11-20 07:32:49.289267] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:27.731 [2024-11-20 07:32:49.289321] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:27.731 [2024-11-20 07:32:49.938318] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.731 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:27.993 Malloc0 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
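Once waitforlisten returns, nmic.sh provisions the target over JSON-RPC: a TCP transport with an 8192-byte I/O unit, a 64 MiB malloc bdev with 512-byte blocks, and subsystem cnode1 (open to any host, serial SPDKISFASTANDAWESOME) with Malloc0 attached as a namespace and a listener on 10.0.0.2:4420. The same sequence as direct rpc.py calls (sketch; rpc_cmd in the harness is a thin wrapper around scripts/rpc.py that also checks the exit status):

    RPC="./scripts/rpc.py"                       # unix socket, so no netns exec needed
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0    # 64 MiB ramdisk, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Test case 1 below then creates cnode2 and tries to add the same Malloc0 to it; the bdev is already claimed exclusive_write by cnode1, so the RPC fails with -32602, which is the result the test wants. Test case 2 adds a second listener on port 4421 so the host can reach cnode1 over two paths.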
00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:27.993 [2024-11-20 07:32:50.038657] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:27.993 test case1: single bdev can't be used in multiple subsystems 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.993 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:27.994 [2024-11-20 07:32:50.073922] bdev.c:8254:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:27.994 [2024-11-20 07:32:50.073949] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:27.994 [2024-11-20 07:32:50.073958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:27.994 request: 00:34:27.994 { 00:34:27.994 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:27.994 "namespace": { 00:34:27.994 "bdev_name": "Malloc0", 00:34:27.994 "no_auto_visible": false 00:34:27.994 }, 00:34:27.994 "method": "nvmf_subsystem_add_ns", 00:34:27.994 "req_id": 1 00:34:27.994 } 00:34:27.994 Got JSON-RPC error response 00:34:27.994 response: 00:34:27.994 { 00:34:27.994 "code": -32602, 00:34:27.994 "message": "Invalid parameters" 00:34:27.994 } 00:34:27.994 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:27.994 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:27.994 07:32:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:27.994 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:27.994 Adding namespace failed - expected result. 00:34:27.994 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:27.994 test case2: host connect to nvmf target in multiple paths 00:34:27.994 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:27.994 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.994 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:27.994 [2024-11-20 07:32:50.086086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:27.994 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.994 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:28.565 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:28.827 07:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:28.827 07:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:34:28.827 07:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:34:28.827 07:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:34:28.827 07:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:34:31.374 07:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:34:31.374 07:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:34:31.374 07:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:34:31.374 07:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:34:31.374 07:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:34:31.374 07:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:34:31.374 07:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:31.374 [global] 00:34:31.374 thread=1 00:34:31.374 invalidate=1 
00:34:31.374 rw=write 00:34:31.374 time_based=1 00:34:31.374 runtime=1 00:34:31.374 ioengine=libaio 00:34:31.374 direct=1 00:34:31.374 bs=4096 00:34:31.374 iodepth=1 00:34:31.374 norandommap=0 00:34:31.374 numjobs=1 00:34:31.374 00:34:31.374 verify_dump=1 00:34:31.374 verify_backlog=512 00:34:31.374 verify_state_save=0 00:34:31.374 do_verify=1 00:34:31.374 verify=crc32c-intel 00:34:31.374 [job0] 00:34:31.374 filename=/dev/nvme0n1 00:34:31.374 Could not set queue depth (nvme0n1) 00:34:31.374 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:31.374 fio-3.35 00:34:31.374 Starting 1 thread 00:34:32.817 00:34:32.817 job0: (groupid=0, jobs=1): err= 0: pid=3786971: Wed Nov 20 07:32:54 2024 00:34:32.817 read: IOPS=16, BW=66.1KiB/s (67.7kB/s)(68.0KiB/1028msec) 00:34:32.817 slat (nsec): min=10171, max=28981, avg=25907.00, stdev=4094.43 00:34:32.817 clat (usec): min=40969, max=42925, avg=41954.22, stdev=481.67 00:34:32.817 lat (usec): min=40995, max=42952, avg=41980.13, stdev=482.10 00:34:32.817 clat percentiles (usec): 00:34:32.817 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:34:32.817 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:32.817 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:34:32.817 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:32.817 | 99.99th=[42730] 00:34:32.817 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:34:32.817 slat (nsec): min=9332, max=70221, avg=29357.89, stdev=10529.03 00:34:32.817 clat (usec): min=273, max=833, avg=578.30, stdev=94.69 00:34:32.817 lat (usec): min=283, max=885, avg=607.65, stdev=100.59 00:34:32.817 clat percentiles (usec): 00:34:32.817 | 1.00th=[ 351], 5.00th=[ 404], 10.00th=[ 453], 20.00th=[ 498], 00:34:32.817 | 30.00th=[ 529], 40.00th=[ 562], 50.00th=[ 578], 60.00th=[ 611], 00:34:32.817 | 70.00th=[ 635], 80.00th=[ 660], 90.00th=[ 701], 95.00th=[ 725], 00:34:32.817 | 99.00th=[ 758], 99.50th=[ 791], 99.90th=[ 832], 99.95th=[ 832], 00:34:32.817 | 99.99th=[ 832] 00:34:32.817 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:32.817 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:32.817 lat (usec) : 500=19.66%, 750=75.80%, 1000=1.32% 00:34:32.817 lat (msec) : 50=3.21% 00:34:32.817 cpu : usr=1.17%, sys=1.75%, ctx=529, majf=0, minf=1 00:34:32.817 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:32.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.817 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.817 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:32.817 00:34:32.817 Run status group 0 (all jobs): 00:34:32.817 READ: bw=66.1KiB/s (67.7kB/s), 66.1KiB/s-66.1KiB/s (67.7kB/s-67.7kB/s), io=68.0KiB (69.6kB), run=1028-1028msec 00:34:32.817 WRITE: bw=1992KiB/s (2040kB/s), 1992KiB/s-1992KiB/s (2040kB/s-2040kB/s), io=2048KiB (2097kB), run=1028-1028msec 00:34:32.817 00:34:32.817 Disk stats (read/write): 00:34:32.817 nvme0n1: ios=63/512, merge=0/0, ticks=938/239, in_queue=1177, util=97.70% 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:32.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:32.817 07:32:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:32.817 rmmod nvme_tcp 00:34:32.817 rmmod nvme_fabrics 00:34:32.817 rmmod nvme_keyring 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3785842 ']' 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3785842 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 3785842 ']' 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 3785842 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3785842 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 3785842' 00:34:32.817 killing process with pid 3785842 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 3785842 00:34:32.817 07:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 3785842 00:34:32.817 07:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:32.817 07:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:32.817 07:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:32.817 07:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:33.077 07:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:33.077 07:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:33.077 07:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:33.077 07:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:33.077 07:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:33.077 07:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:33.077 07:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:33.077 07:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.990 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:34.990 00:34:34.990 real 0m15.769s 00:34:34.990 user 0m36.856s 00:34:34.990 sys 0m7.402s 00:34:34.990 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:34.990 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:34.990 ************************************ 00:34:34.990 END TEST nvmf_nmic 00:34:34.990 ************************************ 00:34:34.990 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:34.990 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:34.990 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:34.990 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:34.990 ************************************ 00:34:34.990 START TEST nvmf_fio_target 00:34:34.990 ************************************ 00:34:34.990 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:35.253 * Looking for test storage... 
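That closes out nvmf_nmic. Its teardown, visible just above, disconnects both host paths, unloads the kernel initiator modules, kills the target by pid, strips only the SPDK_NVMF-tagged firewall rules, and removes the namespace. Condensed (sketch; _remove_spdk_ns hides its commands from the trace, so ip netns delete is an assumption about its effect):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # drops both paths (4420 and 4421)
    modprobe -v -r nvme-tcp                          # rmmod nvme_tcp/nvme_fabrics/nvme_keyring follow
    kill "$nvmfpid" && wait "$nvmfpid"               # killprocess: stop nvmf_tgt
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep unrelated firewall state intact
    ip netns delete cvl_0_0_ns_spdk                  # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1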
00:34:35.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:35.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.253 --rc genhtml_branch_coverage=1 00:34:35.253 --rc genhtml_function_coverage=1 00:34:35.253 --rc genhtml_legend=1 00:34:35.253 --rc geninfo_all_blocks=1 00:34:35.253 --rc geninfo_unexecuted_blocks=1 00:34:35.253 00:34:35.253 ' 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:35.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.253 --rc genhtml_branch_coverage=1 00:34:35.253 --rc genhtml_function_coverage=1 00:34:35.253 --rc genhtml_legend=1 00:34:35.253 --rc geninfo_all_blocks=1 00:34:35.253 --rc geninfo_unexecuted_blocks=1 00:34:35.253 00:34:35.253 ' 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:35.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.253 --rc genhtml_branch_coverage=1 00:34:35.253 --rc genhtml_function_coverage=1 00:34:35.253 --rc genhtml_legend=1 00:34:35.253 --rc geninfo_all_blocks=1 00:34:35.253 --rc geninfo_unexecuted_blocks=1 00:34:35.253 00:34:35.253 ' 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:35.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.253 --rc genhtml_branch_coverage=1 00:34:35.253 --rc genhtml_function_coverage=1 00:34:35.253 --rc genhtml_legend=1 00:34:35.253 --rc geninfo_all_blocks=1 00:34:35.253 --rc geninfo_unexecuted_blocks=1 00:34:35.253 
00:34:35.253 ' 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.253 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:35.254 07:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:43.392 07:33:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:43.392 07:33:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:43.392 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:43.392 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:43.392 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:43.393 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:43.393 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:43.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:43.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:34:43.393 00:34:43.393 --- 10.0.0.2 ping statistics --- 00:34:43.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:43.393 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:43.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:43.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:34:43.393 00:34:43.393 --- 10.0.0.1 ping statistics --- 00:34:43.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:43.393 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3791486 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3791486 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 3791486 ']' 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:43.393 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:43.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
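
The nvmf_tcp_init block above boils down to ordinary iproute2/iptables plumbing: the two PCI functions 0000:4b:00.0 and 0000:4b:00.1 show up as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into a private network namespace to act as the target while cvl_0_1 stays in the root namespace as the initiator, an iptables rule opens the NVMe/TCP port, and both directions are ping-verified. A minimal standalone sketch, assuming the same interface names the trace reports and root privileges (all commands and flags copied from the log):

    ip -4 addr flush cvl_0_0                       # start both ports from a clean slate
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                   # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # move the target port out of the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
    ping -c 1 10.0.0.2                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

Splitting the two ports across namespaces is what keeps the NVMe/TCP traffic off the kernel's local route and on the wire between the ports; the nvmf_tgt launched next runs entirely inside cvl_0_0_ns_spdk, while rpc.py can still reach it because /var/tmp/spdk.sock is a UNIX socket on the shared filesystem.
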
00:34:43.394 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:43.394 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:43.394 [2024-11-20 07:33:05.038359] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:43.394 [2024-11-20 07:33:05.039517] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:34:43.394 [2024-11-20 07:33:05.039573] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:43.394 [2024-11-20 07:33:05.113930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:43.394 [2024-11-20 07:33:05.163472] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:43.394 [2024-11-20 07:33:05.163522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:43.394 [2024-11-20 07:33:05.163529] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:43.394 [2024-11-20 07:33:05.163535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:43.394 [2024-11-20 07:33:05.163539] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:43.394 [2024-11-20 07:33:05.167191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:43.394 [2024-11-20 07:33:05.167268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:43.394 [2024-11-20 07:33:05.167435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:43.394 [2024-11-20 07:33:05.167436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:43.394 [2024-11-20 07:33:05.239677] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:43.394 [2024-11-20 07:33:05.239981] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:43.394 [2024-11-20 07:33:05.240828] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:43.394 [2024-11-20 07:33:05.241235] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:43.394 [2024-11-20 07:33:05.241339] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
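
At this point the target is up inside the namespace with four reactors (-m 0xF) and, because of --interrupt-mode, every reactor and spdk_thread is running interrupt-driven rather than busy-polling, which is what the thread.c notices above record. The RPC sequence that follows then provisions the storage stack; condensed into a sketch, with every argument taken from the trace (the rpc helper and the loops are illustrative shorthand, and the real nvme connect additionally passes --hostnqn/--hostid):

    rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }

    rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8192-byte in-capsule data
    for i in 0 1 2 3 4 5 6; do
        rpc bdev_malloc_create 64 512              # 64 MiB / 512-byte blocks -> Malloc0..Malloc6
    done
    rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'           # striped
    rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'   # concatenated
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for ns in Malloc0 Malloc1 raid0 concat0; do    # four namespaces -> nvme0n1..nvme0n4
        rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$ns"
    done
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # initiator side

waitforserial then polls lsblk -l -o NAME,SERIAL until four block devices carry the SPDKISFASTANDAWESOME serial, at which point the fio-wrapper runs below are free to target /dev/nvme0n1 through /dev/nvme0n4.
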
00:34:43.394 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:43.394 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:34:43.394 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:43.394 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:43.394 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:43.394 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:43.394 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:43.394 [2024-11-20 07:33:05.483922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:43.394 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:43.655 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:43.655 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:43.916 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:43.916 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:43.916 07:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:43.916 07:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:44.178 07:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:44.178 07:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:44.438 07:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:44.699 07:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:44.699 07:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:44.699 07:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:44.699 07:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:44.959 07:33:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:34:44.959 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:45.220 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:45.481 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:45.481 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:45.481 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:45.481 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:45.742 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:46.003 [2024-11-20 07:33:08.028242] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:46.004 07:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:46.004 07:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:46.264 07:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:46.836 07:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:46.836 07:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:34:46.836 07:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:34:46.836 07:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:34:46.836 07:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:34:46.836 07:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:34:48.755 07:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:34:48.755 07:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o 
NAME,SERIAL 00:34:48.755 07:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:34:48.755 07:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:34:48.755 07:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:34:48.755 07:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:34:48.756 07:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:48.756 [global] 00:34:48.756 thread=1 00:34:48.756 invalidate=1 00:34:48.756 rw=write 00:34:48.756 time_based=1 00:34:48.756 runtime=1 00:34:48.756 ioengine=libaio 00:34:48.756 direct=1 00:34:48.756 bs=4096 00:34:48.756 iodepth=1 00:34:48.756 norandommap=0 00:34:48.756 numjobs=1 00:34:48.756 00:34:48.756 verify_dump=1 00:34:48.756 verify_backlog=512 00:34:48.756 verify_state_save=0 00:34:48.756 do_verify=1 00:34:48.756 verify=crc32c-intel 00:34:48.756 [job0] 00:34:48.756 filename=/dev/nvme0n1 00:34:48.756 [job1] 00:34:48.756 filename=/dev/nvme0n2 00:34:48.756 [job2] 00:34:48.756 filename=/dev/nvme0n3 00:34:48.756 [job3] 00:34:48.756 filename=/dev/nvme0n4 00:34:48.756 Could not set queue depth (nvme0n1) 00:34:48.756 Could not set queue depth (nvme0n2) 00:34:48.756 Could not set queue depth (nvme0n3) 00:34:48.756 Could not set queue depth (nvme0n4) 00:34:49.325 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:49.325 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:49.325 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:49.325 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:49.325 fio-3.35 00:34:49.325 Starting 4 threads 00:34:50.711 00:34:50.711 job0: (groupid=0, jobs=1): err= 0: pid=3793130: Wed Nov 20 07:33:12 2024 00:34:50.711 read: IOPS=277, BW=1110KiB/s (1137kB/s)(1128KiB/1016msec) 00:34:50.711 slat (nsec): min=7588, max=46643, avg=27808.96, stdev=2878.26 00:34:50.711 clat (usec): min=577, max=41995, avg=2421.80, stdev=7513.81 00:34:50.711 lat (usec): min=605, max=42023, avg=2449.61, stdev=7513.45 00:34:50.711 clat percentiles (usec): 00:34:50.711 | 1.00th=[ 685], 5.00th=[ 832], 10.00th=[ 873], 20.00th=[ 922], 00:34:50.711 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 988], 60.00th=[ 1004], 00:34:50.711 | 70.00th=[ 1029], 80.00th=[ 1074], 90.00th=[ 1139], 95.00th=[ 1205], 00:34:50.711 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:50.711 | 99.99th=[42206] 00:34:50.711 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:34:50.711 slat (nsec): min=9146, max=55058, avg=30498.65, stdev=11701.03 00:34:50.711 clat (usec): min=209, max=4283, avg=592.80, stdev=218.97 00:34:50.711 lat (usec): min=220, max=4319, avg=623.30, stdev=221.91 00:34:50.711 clat percentiles (usec): 00:34:50.711 | 1.00th=[ 281], 5.00th=[ 343], 10.00th=[ 383], 20.00th=[ 453], 00:34:50.711 | 30.00th=[ 498], 40.00th=[ 545], 50.00th=[ 586], 60.00th=[ 635], 00:34:50.711 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 766], 95.00th=[ 816], 00:34:50.711 | 
99.00th=[ 914], 99.50th=[ 922], 99.90th=[ 4293], 99.95th=[ 4293], 00:34:50.711 | 99.99th=[ 4293] 00:34:50.711 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:34:50.711 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:50.711 lat (usec) : 250=0.38%, 500=19.14%, 750=37.28%, 1000=29.09% 00:34:50.711 lat (msec) : 2=12.72%, 10=0.13%, 50=1.26% 00:34:50.711 cpu : usr=1.77%, sys=2.86%, ctx=796, majf=0, minf=1 00:34:50.711 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:50.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.712 issued rwts: total=282,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.712 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:50.712 job1: (groupid=0, jobs=1): err= 0: pid=3793149: Wed Nov 20 07:33:12 2024 00:34:50.712 read: IOPS=15, BW=63.1KiB/s (64.6kB/s)(64.0KiB/1014msec) 00:34:50.712 slat (nsec): min=26257, max=27399, avg=26670.94, stdev=259.29 00:34:50.712 clat (usec): min=40963, max=42097, avg=41592.81, stdev=464.97 00:34:50.712 lat (usec): min=40990, max=42124, avg=41619.49, stdev=465.01 00:34:50.712 clat percentiles (usec): 00:34:50.712 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:50.712 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:34:50.712 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:50.712 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:50.712 | 99.99th=[42206] 00:34:50.712 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:34:50.712 slat (usec): min=10, max=11677, avg=55.81, stdev=514.68 00:34:50.712 clat (usec): min=191, max=2190, avg=609.97, stdev=197.69 00:34:50.712 lat (usec): min=225, max=12185, avg=665.79, stdev=547.37 00:34:50.712 clat percentiles (usec): 00:34:50.712 | 1.00th=[ 277], 5.00th=[ 363], 10.00th=[ 416], 20.00th=[ 474], 00:34:50.712 | 30.00th=[ 519], 40.00th=[ 562], 50.00th=[ 603], 60.00th=[ 644], 00:34:50.712 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 783], 95.00th=[ 857], 00:34:50.712 | 99.00th=[ 1418], 99.50th=[ 2024], 99.90th=[ 2180], 99.95th=[ 2180], 00:34:50.712 | 99.99th=[ 2180] 00:34:50.712 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:34:50.712 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:50.712 lat (usec) : 250=0.57%, 500=24.24%, 750=59.28%, 1000=11.17% 00:34:50.712 lat (msec) : 2=1.14%, 4=0.57%, 50=3.03% 00:34:50.712 cpu : usr=0.79%, sys=1.58%, ctx=530, majf=0, minf=1 00:34:50.712 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:50.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.712 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.712 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:50.712 job2: (groupid=0, jobs=1): err= 0: pid=3793181: Wed Nov 20 07:33:12 2024 00:34:50.712 read: IOPS=263, BW=1054KiB/s (1079kB/s)(1072KiB/1017msec) 00:34:50.712 slat (nsec): min=7317, max=62737, avg=27691.18, stdev=3982.47 00:34:50.712 clat (usec): min=698, max=42060, avg=2525.43, stdev=7760.08 00:34:50.712 lat (usec): min=726, max=42088, avg=2553.12, stdev=7759.68 00:34:50.712 clat percentiles (usec): 00:34:50.712 | 1.00th=[ 775], 5.00th=[ 
832], 10.00th=[ 881], 20.00th=[ 947], 00:34:50.712 | 30.00th=[ 979], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1029], 00:34:50.712 | 70.00th=[ 1045], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1172], 00:34:50.712 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:50.712 | 99.99th=[42206] 00:34:50.712 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:34:50.712 slat (nsec): min=9443, max=54764, avg=31661.46, stdev=9158.91 00:34:50.712 clat (usec): min=193, max=1053, avg=606.34, stdev=157.65 00:34:50.712 lat (usec): min=213, max=1088, avg=638.00, stdev=160.41 00:34:50.712 clat percentiles (usec): 00:34:50.712 | 1.00th=[ 243], 5.00th=[ 330], 10.00th=[ 383], 20.00th=[ 469], 00:34:50.712 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 652], 00:34:50.712 | 70.00th=[ 701], 80.00th=[ 742], 90.00th=[ 799], 95.00th=[ 857], 00:34:50.712 | 99.00th=[ 988], 99.50th=[ 1020], 99.90th=[ 1057], 99.95th=[ 1057], 00:34:50.712 | 99.99th=[ 1057] 00:34:50.712 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:34:50.712 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:50.712 lat (usec) : 250=0.77%, 500=15.00%, 750=38.97%, 1000=24.87% 00:34:50.712 lat (msec) : 2=19.10%, 50=1.28% 00:34:50.712 cpu : usr=1.57%, sys=3.15%, ctx=780, majf=0, minf=2 00:34:50.712 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:50.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.712 issued rwts: total=268,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.712 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:50.712 job3: (groupid=0, jobs=1): err= 0: pid=3793198: Wed Nov 20 07:33:12 2024 00:34:50.712 read: IOPS=18, BW=73.9KiB/s (75.6kB/s)(76.0KiB/1029msec) 00:34:50.712 slat (nsec): min=27340, max=27826, avg=27639.84, stdev=121.10 00:34:50.712 clat (usec): min=40854, max=41843, avg=41064.29, stdev=266.58 00:34:50.712 lat (usec): min=40882, max=41871, avg=41091.93, stdev=266.61 00:34:50.712 clat percentiles (usec): 00:34:50.712 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:50.712 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:50.712 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:34:50.712 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:34:50.712 | 99.99th=[41681] 00:34:50.712 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:34:50.712 slat (usec): min=10, max=284, avg=30.21, stdev=16.31 00:34:50.712 clat (usec): min=157, max=2841, avg=442.01, stdev=133.30 00:34:50.712 lat (usec): min=192, max=2851, avg=472.22, stdev=135.95 00:34:50.712 clat percentiles (usec): 00:34:50.712 | 1.00th=[ 273], 5.00th=[ 302], 10.00th=[ 330], 20.00th=[ 359], 00:34:50.712 | 30.00th=[ 388], 40.00th=[ 424], 50.00th=[ 453], 60.00th=[ 469], 00:34:50.712 | 70.00th=[ 486], 80.00th=[ 506], 90.00th=[ 537], 95.00th=[ 562], 00:34:50.712 | 99.00th=[ 594], 99.50th=[ 635], 99.90th=[ 2835], 99.95th=[ 2835], 00:34:50.712 | 99.99th=[ 2835] 00:34:50.712 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:34:50.712 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:50.712 lat (usec) : 250=0.38%, 500=74.39%, 750=21.47% 00:34:50.712 lat (msec) : 4=0.19%, 50=3.58% 00:34:50.712 cpu : usr=0.78%, sys=1.36%, ctx=534, 
majf=0, minf=1 00:34:50.712 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:50.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.712 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.712 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:50.712 00:34:50.712 Run status group 0 (all jobs): 00:34:50.712 READ: bw=2274KiB/s (2329kB/s), 63.1KiB/s-1110KiB/s (64.6kB/s-1137kB/s), io=2340KiB (2396kB), run=1014-1029msec 00:34:50.712 WRITE: bw=7961KiB/s (8152kB/s), 1990KiB/s-2020KiB/s (2038kB/s-2068kB/s), io=8192KiB (8389kB), run=1014-1029msec 00:34:50.712 00:34:50.712 Disk stats (read/write): 00:34:50.712 nvme0n1: ios=314/512, merge=0/0, ticks=978/253, in_queue=1231, util=96.69% 00:34:50.712 nvme0n2: ios=63/512, merge=0/0, ticks=877/292, in_queue=1169, util=96.42% 00:34:50.712 nvme0n3: ios=248/512, merge=0/0, ticks=473/247, in_queue=720, util=88.36% 00:34:50.712 nvme0n4: ios=36/512, merge=0/0, ticks=1495/214, in_queue=1709, util=96.57% 00:34:50.712 07:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:50.712 [global] 00:34:50.712 thread=1 00:34:50.712 invalidate=1 00:34:50.712 rw=randwrite 00:34:50.712 time_based=1 00:34:50.712 runtime=1 00:34:50.712 ioengine=libaio 00:34:50.712 direct=1 00:34:50.712 bs=4096 00:34:50.712 iodepth=1 00:34:50.712 norandommap=0 00:34:50.712 numjobs=1 00:34:50.712 00:34:50.712 verify_dump=1 00:34:50.712 verify_backlog=512 00:34:50.712 verify_state_save=0 00:34:50.712 do_verify=1 00:34:50.712 verify=crc32c-intel 00:34:50.712 [job0] 00:34:50.712 filename=/dev/nvme0n1 00:34:50.712 [job1] 00:34:50.712 filename=/dev/nvme0n2 00:34:50.712 [job2] 00:34:50.712 filename=/dev/nvme0n3 00:34:50.712 [job3] 00:34:50.712 filename=/dev/nvme0n4 00:34:50.712 Could not set queue depth (nvme0n1) 00:34:50.712 Could not set queue depth (nvme0n2) 00:34:50.712 Could not set queue depth (nvme0n3) 00:34:50.712 Could not set queue depth (nvme0n4) 00:34:50.973 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:50.974 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:50.974 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:50.974 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:50.974 fio-3.35 00:34:50.974 Starting 4 threads 00:34:52.376 00:34:52.376 job0: (groupid=0, jobs=1): err= 0: pid=3793744: Wed Nov 20 07:33:14 2024 00:34:52.376 read: IOPS=911, BW=3644KiB/s (3732kB/s)(3648KiB/1001msec) 00:34:52.376 slat (nsec): min=6801, max=54672, avg=22619.33, stdev=8485.31 00:34:52.376 clat (usec): min=165, max=782, avg=557.83, stdev=87.12 00:34:52.376 lat (usec): min=192, max=790, avg=580.45, stdev=88.49 00:34:52.376 clat percentiles (usec): 00:34:52.376 | 1.00th=[ 260], 5.00th=[ 392], 10.00th=[ 457], 20.00th=[ 502], 00:34:52.376 | 30.00th=[ 537], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 586], 00:34:52.376 | 70.00th=[ 603], 80.00th=[ 619], 90.00th=[ 644], 95.00th=[ 668], 00:34:52.376 | 99.00th=[ 709], 99.50th=[ 725], 99.90th=[ 783], 99.95th=[ 783], 00:34:52.376 | 99.99th=[ 783] 00:34:52.376 write: IOPS=1022, BW=4092KiB/s 
(4190kB/s)(4096KiB/1001msec); 0 zone resets 00:34:52.376 slat (nsec): min=9128, max=61623, avg=28928.16, stdev=10202.33 00:34:52.376 clat (usec): min=106, max=926, avg=415.88, stdev=151.53 00:34:52.376 lat (usec): min=117, max=960, avg=444.81, stdev=152.07 00:34:52.376 clat percentiles (usec): 00:34:52.376 | 1.00th=[ 139], 5.00th=[ 241], 10.00th=[ 269], 20.00th=[ 297], 00:34:52.376 | 30.00th=[ 338], 40.00th=[ 363], 50.00th=[ 379], 60.00th=[ 404], 00:34:52.376 | 70.00th=[ 437], 80.00th=[ 515], 90.00th=[ 660], 95.00th=[ 742], 00:34:52.376 | 99.00th=[ 840], 99.50th=[ 865], 99.90th=[ 922], 99.95th=[ 930], 00:34:52.376 | 99.99th=[ 930] 00:34:52.376 bw ( KiB/s): min= 4096, max= 4096, per=33.89%, avg=4096.00, stdev= 0.00, samples=1 00:34:52.376 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:52.376 lat (usec) : 250=3.98%, 500=46.80%, 750=46.75%, 1000=2.48% 00:34:52.376 cpu : usr=2.10%, sys=6.20%, ctx=1938, majf=0, minf=1 00:34:52.376 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:52.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.376 issued rwts: total=912,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.376 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:52.376 job1: (groupid=0, jobs=1): err= 0: pid=3793751: Wed Nov 20 07:33:14 2024 00:34:52.376 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:52.376 slat (nsec): min=26552, max=47301, avg=27893.02, stdev=2973.03 00:34:52.376 clat (usec): min=776, max=1332, avg=1040.18, stdev=86.30 00:34:52.376 lat (usec): min=803, max=1359, avg=1068.07, stdev=85.96 00:34:52.376 clat percentiles (usec): 00:34:52.376 | 1.00th=[ 832], 5.00th=[ 889], 10.00th=[ 930], 20.00th=[ 979], 00:34:52.376 | 30.00th=[ 1004], 40.00th=[ 1020], 50.00th=[ 1045], 60.00th=[ 1057], 00:34:52.376 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1188], 00:34:52.376 | 99.00th=[ 1254], 99.50th=[ 1287], 99.90th=[ 1336], 99.95th=[ 1336], 00:34:52.376 | 99.99th=[ 1336] 00:34:52.376 write: IOPS=676, BW=2705KiB/s (2770kB/s)(2708KiB/1001msec); 0 zone resets 00:34:52.376 slat (nsec): min=9106, max=67192, avg=30352.95, stdev=9673.89 00:34:52.376 clat (usec): min=253, max=4140, avg=623.31, stdev=190.71 00:34:52.376 lat (usec): min=263, max=4181, avg=653.66, stdev=193.43 00:34:52.376 clat percentiles (usec): 00:34:52.376 | 1.00th=[ 351], 5.00th=[ 392], 10.00th=[ 453], 20.00th=[ 515], 00:34:52.376 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 652], 00:34:52.376 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 766], 95.00th=[ 807], 00:34:52.376 | 99.00th=[ 881], 99.50th=[ 922], 99.90th=[ 4146], 99.95th=[ 4146], 00:34:52.376 | 99.99th=[ 4146] 00:34:52.376 bw ( KiB/s): min= 4096, max= 4096, per=33.89%, avg=4096.00, stdev= 0.00, samples=1 00:34:52.376 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:52.376 lat (usec) : 500=10.09%, 750=39.28%, 1000=19.26% 00:34:52.376 lat (msec) : 2=31.20%, 4=0.08%, 10=0.08% 00:34:52.376 cpu : usr=1.20%, sys=5.90%, ctx=1191, majf=0, minf=1 00:34:52.376 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:52.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.376 issued rwts: total=512,677,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.376 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:34:52.376 job2: (groupid=0, jobs=1): err= 0: pid=3793756: Wed Nov 20 07:33:14 2024 00:34:52.376 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:52.376 slat (nsec): min=25911, max=44952, avg=27441.85, stdev=2965.33 00:34:52.376 clat (usec): min=777, max=1364, avg=1096.11, stdev=102.56 00:34:52.376 lat (usec): min=805, max=1390, avg=1123.55, stdev=102.47 00:34:52.376 clat percentiles (usec): 00:34:52.376 | 1.00th=[ 816], 5.00th=[ 898], 10.00th=[ 955], 20.00th=[ 1020], 00:34:52.376 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1139], 00:34:52.376 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[ 1237], 00:34:52.376 | 99.00th=[ 1287], 99.50th=[ 1303], 99.90th=[ 1369], 99.95th=[ 1369], 00:34:52.376 | 99.99th=[ 1369] 00:34:52.376 write: IOPS=617, BW=2470KiB/s (2529kB/s)(2472KiB/1001msec); 0 zone resets 00:34:52.376 slat (nsec): min=9552, max=55735, avg=29786.59, stdev=9749.92 00:34:52.376 clat (usec): min=275, max=935, avg=640.21, stdev=121.00 00:34:52.376 lat (usec): min=295, max=970, avg=670.00, stdev=124.63 00:34:52.376 clat percentiles (usec): 00:34:52.376 | 1.00th=[ 363], 5.00th=[ 416], 10.00th=[ 478], 20.00th=[ 529], 00:34:52.377 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 644], 60.00th=[ 693], 00:34:52.377 | 70.00th=[ 725], 80.00th=[ 750], 90.00th=[ 775], 95.00th=[ 816], 00:34:52.377 | 99.00th=[ 881], 99.50th=[ 898], 99.90th=[ 938], 99.95th=[ 938], 00:34:52.377 | 99.99th=[ 938] 00:34:52.377 bw ( KiB/s): min= 4096, max= 4096, per=33.89%, avg=4096.00, stdev= 0.00, samples=1 00:34:52.377 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:52.377 lat (usec) : 500=7.96%, 750=36.73%, 1000=17.70% 00:34:52.377 lat (msec) : 2=37.61% 00:34:52.377 cpu : usr=1.90%, sys=3.20%, ctx=1131, majf=0, minf=1 00:34:52.377 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:52.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.377 issued rwts: total=512,618,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.377 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:52.377 job3: (groupid=0, jobs=1): err= 0: pid=3793762: Wed Nov 20 07:33:14 2024 00:34:52.377 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:52.377 slat (nsec): min=8164, max=58512, avg=26982.99, stdev=2766.94 00:34:52.377 clat (usec): min=678, max=1347, avg=1031.72, stdev=86.67 00:34:52.377 lat (usec): min=704, max=1374, avg=1058.70, stdev=86.57 00:34:52.377 clat percentiles (usec): 00:34:52.377 | 1.00th=[ 791], 5.00th=[ 873], 10.00th=[ 914], 20.00th=[ 971], 00:34:52.377 | 30.00th=[ 1004], 40.00th=[ 1020], 50.00th=[ 1037], 60.00th=[ 1057], 00:34:52.377 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1172], 00:34:52.377 | 99.00th=[ 1237], 99.50th=[ 1237], 99.90th=[ 1352], 99.95th=[ 1352], 00:34:52.377 | 99.99th=[ 1352] 00:34:52.377 write: IOPS=705, BW=2821KiB/s (2889kB/s)(2824KiB/1001msec); 0 zone resets 00:34:52.377 slat (nsec): min=9021, max=51553, avg=30143.99, stdev=8576.31 00:34:52.377 clat (usec): min=179, max=982, avg=604.49, stdev=131.11 00:34:52.377 lat (usec): min=212, max=1029, avg=634.63, stdev=133.92 00:34:52.377 clat percentiles (usec): 00:34:52.377 | 1.00th=[ 285], 5.00th=[ 379], 10.00th=[ 433], 20.00th=[ 494], 00:34:52.377 | 30.00th=[ 529], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 644], 00:34:52.377 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 775], 95.00th=[ 
816], 00:34:52.377 | 99.00th=[ 889], 99.50th=[ 930], 99.90th=[ 979], 99.95th=[ 979], 00:34:52.377 | 99.99th=[ 979] 00:34:52.377 bw ( KiB/s): min= 4096, max= 4096, per=33.89%, avg=4096.00, stdev= 0.00, samples=1 00:34:52.377 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:52.377 lat (usec) : 250=0.16%, 500=13.38%, 750=37.03%, 1000=19.21% 00:34:52.377 lat (msec) : 2=30.21% 00:34:52.377 cpu : usr=2.70%, sys=4.70%, ctx=1218, majf=0, minf=2 00:34:52.377 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:52.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.377 issued rwts: total=512,706,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.377 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:52.377 00:34:52.377 Run status group 0 (all jobs): 00:34:52.377 READ: bw=9782KiB/s (10.0MB/s), 2046KiB/s-3644KiB/s (2095kB/s-3732kB/s), io=9792KiB (10.0MB), run=1001-1001msec 00:34:52.377 WRITE: bw=11.8MiB/s (12.4MB/s), 2470KiB/s-4092KiB/s (2529kB/s-4190kB/s), io=11.8MiB (12.4MB), run=1001-1001msec 00:34:52.377 00:34:52.377 Disk stats (read/write): 00:34:52.377 nvme0n1: ios=667/1024, merge=0/0, ticks=1306/385, in_queue=1691, util=96.69% 00:34:52.377 nvme0n2: ios=487/512, merge=0/0, ticks=1136/266, in_queue=1402, util=97.35% 00:34:52.377 nvme0n3: ios=467/512, merge=0/0, ticks=1184/319, in_queue=1503, util=97.89% 00:34:52.377 nvme0n4: ios=471/512, merge=0/0, ticks=428/245, in_queue=673, util=89.54% 00:34:52.377 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:52.377 [global] 00:34:52.377 thread=1 00:34:52.377 invalidate=1 00:34:52.377 rw=write 00:34:52.377 time_based=1 00:34:52.377 runtime=1 00:34:52.377 ioengine=libaio 00:34:52.377 direct=1 00:34:52.377 bs=4096 00:34:52.377 iodepth=128 00:34:52.377 norandommap=0 00:34:52.377 numjobs=1 00:34:52.377 00:34:52.377 verify_dump=1 00:34:52.377 verify_backlog=512 00:34:52.377 verify_state_save=0 00:34:52.377 do_verify=1 00:34:52.377 verify=crc32c-intel 00:34:52.377 [job0] 00:34:52.377 filename=/dev/nvme0n1 00:34:52.377 [job1] 00:34:52.377 filename=/dev/nvme0n2 00:34:52.377 [job2] 00:34:52.377 filename=/dev/nvme0n3 00:34:52.377 [job3] 00:34:52.377 filename=/dev/nvme0n4 00:34:52.377 Could not set queue depth (nvme0n1) 00:34:52.377 Could not set queue depth (nvme0n2) 00:34:52.377 Could not set queue depth (nvme0n3) 00:34:52.377 Could not set queue depth (nvme0n4) 00:34:52.635 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:52.635 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:52.635 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:52.635 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:52.635 fio-3.35 00:34:52.635 Starting 4 threads 00:34:54.043 00:34:54.043 job0: (groupid=0, jobs=1): err= 0: pid=3794253: Wed Nov 20 07:33:15 2024 00:34:54.043 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:34:54.043 slat (nsec): min=912, max=8546.4k, avg=63989.40, stdev=475736.16 00:34:54.043 clat (usec): min=2959, max=35902, avg=8838.50, stdev=3619.08 00:34:54.043 lat (usec): min=2962, 
max=35908, avg=8902.49, stdev=3652.50 00:34:54.043 clat percentiles (usec): 00:34:54.043 | 1.00th=[ 3589], 5.00th=[ 4686], 10.00th=[ 5276], 20.00th=[ 6587], 00:34:54.043 | 30.00th=[ 7242], 40.00th=[ 7767], 50.00th=[ 8160], 60.00th=[ 8586], 00:34:54.043 | 70.00th=[ 9241], 80.00th=[10552], 90.00th=[13042], 95.00th=[15401], 00:34:54.043 | 99.00th=[21890], 99.50th=[29492], 99.90th=[35390], 99.95th=[35914], 00:34:54.043 | 99.99th=[35914] 00:34:54.043 write: IOPS=6974, BW=27.2MiB/s (28.6MB/s)(27.3MiB/1003msec); 0 zone resets 00:34:54.043 slat (nsec): min=1664, max=7870.1k, avg=68383.96, stdev=427617.81 00:34:54.043 clat (usec): min=778, max=35887, avg=9715.18, stdev=6292.17 00:34:54.043 lat (usec): min=1365, max=35890, avg=9783.56, stdev=6333.65 00:34:54.043 clat percentiles (usec): 00:34:54.043 | 1.00th=[ 3195], 5.00th=[ 3556], 10.00th=[ 4228], 20.00th=[ 5407], 00:34:54.043 | 30.00th=[ 6128], 40.00th=[ 6783], 50.00th=[ 7701], 60.00th=[ 8455], 00:34:54.043 | 70.00th=[ 9896], 80.00th=[14222], 90.00th=[17433], 95.00th=[25822], 00:34:54.043 | 99.00th=[30802], 99.50th=[31851], 99.90th=[32637], 99.95th=[32637], 00:34:54.043 | 99.99th=[35914] 00:34:54.043 bw ( KiB/s): min=20840, max=34096, per=27.56%, avg=27468.00, stdev=9373.41, samples=2 00:34:54.043 iops : min= 5210, max= 8524, avg=6867.00, stdev=2343.35, samples=2 00:34:54.043 lat (usec) : 1000=0.01% 00:34:54.043 lat (msec) : 2=0.10%, 4=5.06%, 10=67.97%, 20=21.70%, 50=5.16% 00:34:54.043 cpu : usr=5.49%, sys=6.39%, ctx=452, majf=0, minf=1 00:34:54.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:54.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:54.043 issued rwts: total=6656,6995,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:54.043 job1: (groupid=0, jobs=1): err= 0: pid=3794254: Wed Nov 20 07:33:15 2024 00:34:54.043 read: IOPS=7880, BW=30.8MiB/s (32.3MB/s)(31.0MiB/1006msec) 00:34:54.043 slat (nsec): min=943, max=10151k, avg=63332.79, stdev=508052.64 00:34:54.043 clat (usec): min=1645, max=20401, avg=8154.93, stdev=2272.28 00:34:54.043 lat (usec): min=2654, max=20406, avg=8218.26, stdev=2309.31 00:34:54.043 clat percentiles (usec): 00:34:54.043 | 1.00th=[ 4686], 5.00th=[ 6063], 10.00th=[ 6194], 20.00th=[ 6587], 00:34:54.043 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7373], 60.00th=[ 7832], 00:34:54.043 | 70.00th=[ 8455], 80.00th=[ 9503], 90.00th=[11469], 95.00th=[13042], 00:34:54.043 | 99.00th=[14484], 99.50th=[16909], 99.90th=[20317], 99.95th=[20317], 00:34:54.043 | 99.99th=[20317] 00:34:54.043 write: IOPS=8143, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1006msec); 0 zone resets 00:34:54.043 slat (nsec): min=1574, max=7192.7k, avg=56449.28, stdev=413550.29 00:34:54.043 clat (usec): min=681, max=20325, avg=7676.54, stdev=2968.37 00:34:54.043 lat (usec): min=1409, max=20334, avg=7732.99, stdev=2987.82 00:34:54.043 clat percentiles (usec): 00:34:54.043 | 1.00th=[ 3261], 5.00th=[ 4228], 10.00th=[ 4490], 20.00th=[ 5669], 00:34:54.043 | 30.00th=[ 6456], 40.00th=[ 6915], 50.00th=[ 7177], 60.00th=[ 7373], 00:34:54.043 | 70.00th=[ 7832], 80.00th=[ 9110], 90.00th=[11207], 95.00th=[14746], 00:34:54.043 | 99.00th=[17695], 99.50th=[18744], 99.90th=[19792], 99.95th=[19792], 00:34:54.043 | 99.99th=[20317] 00:34:54.043 bw ( KiB/s): min=30848, max=34688, per=32.88%, avg=32768.00, stdev=2715.29, samples=2 00:34:54.043 iops : min= 7712, max= 8672, 
avg=8192.00, stdev=678.82, samples=2 00:34:54.043 lat (usec) : 750=0.01% 00:34:54.043 lat (msec) : 2=0.06%, 4=1.46%, 10=82.80%, 20=15.60%, 50=0.07% 00:34:54.043 cpu : usr=5.17%, sys=7.46%, ctx=503, majf=0, minf=2 00:34:54.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:54.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:54.043 issued rwts: total=7928,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:54.043 job2: (groupid=0, jobs=1): err= 0: pid=3794259: Wed Nov 20 07:33:15 2024 00:34:54.043 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:34:54.043 slat (nsec): min=961, max=11614k, avg=119389.53, stdev=751159.11 00:34:54.043 clat (usec): min=6375, max=47220, avg=15590.11, stdev=8284.70 00:34:54.043 lat (usec): min=6378, max=51144, avg=15709.50, stdev=8345.72 00:34:54.043 clat percentiles (usec): 00:34:54.043 | 1.00th=[ 7242], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[10683], 00:34:54.043 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11600], 60.00th=[12387], 00:34:54.043 | 70.00th=[14484], 80.00th=[21103], 90.00th=[29230], 95.00th=[34341], 00:34:54.043 | 99.00th=[44827], 99.50th=[44827], 99.90th=[47449], 99.95th=[47449], 00:34:54.043 | 99.99th=[47449] 00:34:54.043 write: IOPS=3557, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:34:54.043 slat (nsec): min=1649, max=14986k, avg=171977.96, stdev=800115.65 00:34:54.043 clat (usec): min=4263, max=75843, avg=21978.32, stdev=14540.43 00:34:54.043 lat (usec): min=4951, max=77725, avg=22150.30, stdev=14628.68 00:34:54.043 clat percentiles (usec): 00:34:54.043 | 1.00th=[ 6521], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10028], 00:34:54.043 | 30.00th=[13304], 40.00th=[15270], 50.00th=[16057], 60.00th=[20317], 00:34:54.043 | 70.00th=[24773], 80.00th=[31589], 90.00th=[40109], 95.00th=[57410], 00:34:54.043 | 99.00th=[74974], 99.50th=[74974], 99.90th=[76022], 99.95th=[76022], 00:34:54.043 | 99.99th=[76022] 00:34:54.043 bw ( KiB/s): min=10488, max=17096, per=13.84%, avg=13792.00, stdev=4672.56, samples=2 00:34:54.043 iops : min= 2622, max= 4274, avg=3448.00, stdev=1168.14, samples=2 00:34:54.043 lat (msec) : 10=15.19%, 20=52.29%, 50=29.16%, 100=3.35% 00:34:54.043 cpu : usr=1.69%, sys=2.99%, ctx=473, majf=0, minf=1 00:34:54.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:34:54.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:54.043 issued rwts: total=3072,3575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:54.043 job3: (groupid=0, jobs=1): err= 0: pid=3794262: Wed Nov 20 07:33:15 2024 00:34:54.043 read: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec) 00:34:54.043 slat (nsec): min=1037, max=8646.6k, avg=67540.17, stdev=541333.11 00:34:54.043 clat (usec): min=3329, max=17854, avg=8675.40, stdev=2278.51 00:34:54.043 lat (usec): min=3645, max=19216, avg=8742.94, stdev=2313.12 00:34:54.043 clat percentiles (usec): 00:34:54.043 | 1.00th=[ 4555], 5.00th=[ 5866], 10.00th=[ 6325], 20.00th=[ 6980], 00:34:54.043 | 30.00th=[ 7308], 40.00th=[ 7701], 50.00th=[ 8029], 60.00th=[ 8717], 00:34:54.043 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[12256], 95.00th=[12911], 00:34:54.043 | 99.00th=[16057], 
99.50th=[16188], 99.90th=[17171], 99.95th=[17171], 00:34:54.043 | 99.99th=[17957] 00:34:54.043 write: IOPS=6302, BW=24.6MiB/s (25.8MB/s)(24.8MiB/1008msec); 0 zone resets 00:34:54.043 slat (nsec): min=1717, max=40841k, avg=86163.60, stdev=1202746.54 00:34:54.043 clat (usec): min=1378, max=183423, avg=9568.42, stdev=12808.89 00:34:54.044 lat (usec): min=1408, max=183436, avg=9654.59, stdev=12999.95 00:34:54.044 clat percentiles (msec): 00:34:54.044 | 1.00th=[ 5], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 6], 00:34:54.044 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 8], 60.00th=[ 8], 00:34:54.044 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 11], 95.00th=[ 15], 00:34:54.044 | 99.00th=[ 83], 99.50th=[ 124], 99.90th=[ 184], 99.95th=[ 184], 00:34:54.044 | 99.99th=[ 184] 00:34:54.044 bw ( KiB/s): min=20480, max=29328, per=24.99%, avg=24904.00, stdev=6256.48, samples=2 00:34:54.044 iops : min= 5120, max= 7332, avg=6226.00, stdev=1564.12, samples=2 00:34:54.044 lat (msec) : 2=0.02%, 4=0.77%, 10=79.34%, 20=17.84%, 50=1.53% 00:34:54.044 lat (msec) : 100=0.26%, 250=0.26% 00:34:54.044 cpu : usr=4.57%, sys=6.26%, ctx=300, majf=0, minf=2 00:34:54.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:54.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:54.044 issued rwts: total=6144,6353,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:54.044 00:34:54.044 Run status group 0 (all jobs): 00:34:54.044 READ: bw=92.2MiB/s (96.7MB/s), 11.9MiB/s-30.8MiB/s (12.5MB/s-32.3MB/s), io=93.0MiB (97.5MB), run=1003-1008msec 00:34:54.044 WRITE: bw=97.3MiB/s (102MB/s), 13.9MiB/s-31.8MiB/s (14.6MB/s-33.4MB/s), io=98.1MiB (103MB), run=1003-1008msec 00:34:54.044 00:34:54.044 Disk stats (read/write): 00:34:54.044 nvme0n1: ios=5684/5831, merge=0/0, ticks=48552/52455, in_queue=101007, util=96.39% 00:34:54.044 nvme0n2: ios=6637/6656, merge=0/0, ticks=51530/49417, in_queue=100947, util=91.13% 00:34:54.044 nvme0n3: ios=2313/2560, merge=0/0, ticks=18642/29652, in_queue=48294, util=96.21% 00:34:54.044 nvme0n4: ios=4993/5120, merge=0/0, ticks=41509/34979, in_queue=76488, util=97.87% 00:34:54.044 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:54.044 [global] 00:34:54.044 thread=1 00:34:54.044 invalidate=1 00:34:54.044 rw=randwrite 00:34:54.044 time_based=1 00:34:54.044 runtime=1 00:34:54.044 ioengine=libaio 00:34:54.044 direct=1 00:34:54.044 bs=4096 00:34:54.044 iodepth=128 00:34:54.044 norandommap=0 00:34:54.044 numjobs=1 00:34:54.044 00:34:54.044 verify_dump=1 00:34:54.044 verify_backlog=512 00:34:54.044 verify_state_save=0 00:34:54.044 do_verify=1 00:34:54.044 verify=crc32c-intel 00:34:54.044 [job0] 00:34:54.044 filename=/dev/nvme0n1 00:34:54.044 [job1] 00:34:54.044 filename=/dev/nvme0n2 00:34:54.044 [job2] 00:34:54.044 filename=/dev/nvme0n3 00:34:54.044 [job3] 00:34:54.044 filename=/dev/nvme0n4 00:34:54.044 Could not set queue depth (nvme0n1) 00:34:54.044 Could not set queue depth (nvme0n2) 00:34:54.044 Could not set queue depth (nvme0n3) 00:34:54.044 Could not set queue depth (nvme0n4) 00:34:54.304 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:54.304 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:54.304 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:54.304 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:54.304 fio-3.35 00:34:54.304 Starting 4 threads 00:34:55.717 00:34:55.717 job0: (groupid=0, jobs=1): err= 0: pid=3794773: Wed Nov 20 07:33:17 2024 00:34:55.718 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:34:55.718 slat (nsec): min=955, max=15592k, avg=96832.19, stdev=788697.77 00:34:55.718 clat (usec): min=3465, max=69914, avg=13446.11, stdev=10631.23 00:34:55.718 lat (usec): min=3474, max=79899, avg=13542.95, stdev=10723.21 00:34:55.718 clat percentiles (usec): 00:34:55.718 | 1.00th=[ 4424], 5.00th=[ 5932], 10.00th=[ 6783], 20.00th=[ 7308], 00:34:55.718 | 30.00th=[ 7898], 40.00th=[ 8225], 50.00th=[ 8979], 60.00th=[10028], 00:34:55.718 | 70.00th=[12125], 80.00th=[13960], 90.00th=[32113], 95.00th=[39584], 00:34:55.718 | 99.00th=[50594], 99.50th=[54264], 99.90th=[67634], 99.95th=[67634], 00:34:55.718 | 99.99th=[69731] 00:34:55.718 write: IOPS=5095, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:34:55.718 slat (nsec): min=1557, max=16427k, avg=99429.29, stdev=746522.12 00:34:55.718 clat (usec): min=632, max=81857, avg=12723.06, stdev=13036.36 00:34:55.718 lat (usec): min=1249, max=81865, avg=12822.49, stdev=13136.90 00:34:55.718 clat percentiles (usec): 00:34:55.718 | 1.00th=[ 3294], 5.00th=[ 4555], 10.00th=[ 5014], 20.00th=[ 6194], 00:34:55.718 | 30.00th=[ 7373], 40.00th=[ 7832], 50.00th=[ 8160], 60.00th=[ 8717], 00:34:55.718 | 70.00th=[ 9765], 80.00th=[13435], 90.00th=[30540], 95.00th=[38011], 00:34:55.718 | 99.00th=[77071], 99.50th=[81265], 99.90th=[81265], 99.95th=[82314], 00:34:55.718 | 99.99th=[82314] 00:34:55.718 bw ( KiB/s): min=11192, max=28672, per=19.14%, avg=19932.00, stdev=12360.23, samples=2 00:34:55.718 iops : min= 2798, max= 7168, avg=4983.00, stdev=3090.06, samples=2 00:34:55.718 lat (usec) : 750=0.01% 00:34:55.718 lat (msec) : 2=0.17%, 4=1.30%, 10=63.68%, 20=19.00%, 50=14.04% 00:34:55.718 lat (msec) : 100=1.79% 00:34:55.718 cpu : usr=2.30%, sys=5.89%, ctx=361, majf=0, minf=1 00:34:55.718 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:55.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:55.718 issued rwts: total=4608,5111,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.718 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:55.718 job1: (groupid=0, jobs=1): err= 0: pid=3794774: Wed Nov 20 07:33:17 2024 00:34:55.718 read: IOPS=8414, BW=32.9MiB/s (34.5MB/s)(33.0MiB/1005msec) 00:34:55.718 slat (nsec): min=908, max=11080k, avg=60907.82, stdev=484880.63 00:34:55.718 clat (usec): min=2065, max=23572, avg=7993.09, stdev=2714.63 00:34:55.718 lat (usec): min=2070, max=23577, avg=8053.99, stdev=2736.48 00:34:55.718 clat percentiles (usec): 00:34:55.718 | 1.00th=[ 3654], 5.00th=[ 5407], 10.00th=[ 5932], 20.00th=[ 6259], 00:34:55.718 | 30.00th=[ 6652], 40.00th=[ 6915], 50.00th=[ 7177], 60.00th=[ 7635], 00:34:55.718 | 70.00th=[ 8094], 80.00th=[ 9241], 90.00th=[11207], 95.00th=[12911], 00:34:55.718 | 99.00th=[21103], 99.50th=[21890], 99.90th=[23462], 99.95th=[23462], 00:34:55.718 | 99.99th=[23462] 00:34:55.718 write: IOPS=8660, BW=33.8MiB/s (35.5MB/s)(34.0MiB/1005msec); 0 zone resets 00:34:55.718 
slat (nsec): min=1529, max=9168.4k, avg=50366.32, stdev=383468.50 00:34:55.718 clat (usec): min=1171, max=23469, avg=6881.69, stdev=2178.85 00:34:55.718 lat (usec): min=1183, max=23479, avg=6932.06, stdev=2197.24 00:34:55.718 clat percentiles (usec): 00:34:55.718 | 1.00th=[ 2024], 5.00th=[ 3949], 10.00th=[ 4293], 20.00th=[ 5080], 00:34:55.718 | 30.00th=[ 5735], 40.00th=[ 6587], 50.00th=[ 6849], 60.00th=[ 7046], 00:34:55.718 | 70.00th=[ 7439], 80.00th=[ 8160], 90.00th=[ 9896], 95.00th=[10683], 00:34:55.718 | 99.00th=[13960], 99.50th=[14746], 99.90th=[17433], 99.95th=[17957], 00:34:55.718 | 99.99th=[23462] 00:34:55.718 bw ( KiB/s): min=34672, max=34960, per=33.44%, avg=34816.00, stdev=203.65, samples=2 00:34:55.718 iops : min= 8668, max= 8740, avg=8704.00, stdev=50.91, samples=2 00:34:55.718 lat (msec) : 2=0.49%, 4=3.02%, 10=83.17%, 20=12.75%, 50=0.57% 00:34:55.718 cpu : usr=6.08%, sys=7.47%, ctx=568, majf=0, minf=2 00:34:55.718 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:55.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:55.718 issued rwts: total=8457,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.718 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:55.718 job2: (groupid=0, jobs=1): err= 0: pid=3794775: Wed Nov 20 07:33:17 2024 00:34:55.718 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:34:55.718 slat (nsec): min=979, max=12181k, avg=83881.50, stdev=459398.58 00:34:55.718 clat (usec): min=6714, max=51414, avg=10173.83, stdev=4436.33 00:34:55.718 lat (usec): min=6851, max=51421, avg=10257.71, stdev=4467.92 00:34:55.718 clat percentiles (usec): 00:34:55.718 | 1.00th=[ 7242], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 8979], 00:34:55.718 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9503], 00:34:55.718 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[10814], 95.00th=[11207], 00:34:55.718 | 99.00th=[40109], 99.50th=[43254], 99.90th=[51643], 99.95th=[51643], 00:34:55.718 | 99.99th=[51643] 00:34:55.718 write: IOPS=5704, BW=22.3MiB/s (23.4MB/s)(22.3MiB/1002msec); 0 zone resets 00:34:55.718 slat (nsec): min=1591, max=16219k, avg=89580.42, stdev=580002.09 00:34:55.718 clat (usec): min=1478, max=51690, avg=12036.99, stdev=9514.12 00:34:55.718 lat (usec): min=2537, max=51699, avg=12126.57, stdev=9557.11 00:34:55.718 clat percentiles (usec): 00:34:55.718 | 1.00th=[ 5014], 5.00th=[ 7242], 10.00th=[ 7635], 20.00th=[ 7963], 00:34:55.718 | 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9241], 00:34:55.718 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[29230], 95.00th=[36963], 00:34:55.718 | 99.00th=[44827], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 00:34:55.718 | 99.99th=[51643] 00:34:55.718 bw ( KiB/s): min=16384, max=28672, per=21.64%, avg=22528.00, stdev=8688.93, samples=2 00:34:55.718 iops : min= 4096, max= 7168, avg=5632.00, stdev=2172.23, samples=2 00:34:55.718 lat (msec) : 2=0.01%, 4=0.28%, 10=76.46%, 20=16.50%, 50=6.24% 00:34:55.718 lat (msec) : 100=0.51% 00:34:55.718 cpu : usr=2.40%, sys=3.20%, ctx=785, majf=0, minf=1 00:34:55.718 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:34:55.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:55.718 issued rwts: total=5632,5716,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.718 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:34:55.718 job3: (groupid=0, jobs=1): err= 0: pid=3794776: Wed Nov 20 07:33:17 2024 00:34:55.718 read: IOPS=6512, BW=25.4MiB/s (26.7MB/s)(25.6MiB/1006msec) 00:34:55.718 slat (nsec): min=948, max=15563k, avg=81334.53, stdev=645427.02 00:34:55.718 clat (usec): min=2990, max=37508, avg=10584.79, stdev=3820.03 00:34:55.718 lat (usec): min=2996, max=37523, avg=10666.12, stdev=3851.10 00:34:55.718 clat percentiles (usec): 00:34:55.718 | 1.00th=[ 4424], 5.00th=[ 6259], 10.00th=[ 7111], 20.00th=[ 8160], 00:34:55.718 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10159], 00:34:55.718 | 70.00th=[11207], 80.00th=[12387], 90.00th=[14746], 95.00th=[17171], 00:34:55.718 | 99.00th=[26084], 99.50th=[27132], 99.90th=[29230], 99.95th=[29230], 00:34:55.718 | 99.99th=[37487] 00:34:55.718 write: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec); 0 zone resets 00:34:55.718 slat (nsec): min=1666, max=9740.2k, avg=58395.55, stdev=463453.78 00:34:55.718 clat (usec): min=728, max=26395, avg=8635.86, stdev=3190.36 00:34:55.718 lat (usec): min=748, max=26397, avg=8694.25, stdev=3210.84 00:34:55.718 clat percentiles (usec): 00:34:55.718 | 1.00th=[ 1958], 5.00th=[ 4555], 10.00th=[ 5211], 20.00th=[ 6063], 00:34:55.718 | 30.00th=[ 6915], 40.00th=[ 7439], 50.00th=[ 8160], 60.00th=[ 8979], 00:34:55.718 | 70.00th=[ 9896], 80.00th=[11338], 90.00th=[12518], 95.00th=[14746], 00:34:55.718 | 99.00th=[18482], 99.50th=[19006], 99.90th=[25822], 99.95th=[26346], 00:34:55.718 | 99.99th=[26346] 00:34:55.718 bw ( KiB/s): min=24576, max=28672, per=25.57%, avg=26624.00, stdev=2896.31, samples=2 00:34:55.718 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:34:55.718 lat (usec) : 750=0.02%, 1000=0.04% 00:34:55.718 lat (msec) : 2=0.49%, 4=1.62%, 10=61.68%, 20=34.36%, 50=1.79% 00:34:55.718 cpu : usr=4.98%, sys=6.77%, ctx=379, majf=0, minf=2 00:34:55.718 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:55.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:55.718 issued rwts: total=6552,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.718 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:55.718 00:34:55.718 Run status group 0 (all jobs): 00:34:55.718 READ: bw=98.0MiB/s (103MB/s), 17.9MiB/s-32.9MiB/s (18.8MB/s-34.5MB/s), io=98.6MiB (103MB), run=1002-1006msec 00:34:55.718 WRITE: bw=102MiB/s (107MB/s), 19.9MiB/s-33.8MiB/s (20.9MB/s-35.5MB/s), io=102MiB (107MB), run=1002-1006msec 00:34:55.718 00:34:55.718 Disk stats (read/write): 00:34:55.718 nvme0n1: ios=4008/4096, merge=0/0, ticks=33646/31047, in_queue=64693, util=99.40% 00:34:55.718 nvme0n2: ios=6705/6830, merge=0/0, ticks=49433/43689, in_queue=93122, util=93.15% 00:34:55.718 nvme0n3: ios=4068/4096, merge=0/0, ticks=11651/12153, in_queue=23804, util=97.66% 00:34:55.718 nvme0n4: ios=5153/5290, merge=0/0, ticks=50822/43160, in_queue=93982, util=99.43% 00:34:55.718 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:55.718 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3795113 00:34:55.718 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:55.718 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper 
-p nvmf -i 4096 -d 1 -t read -r 10 00:34:55.718 [global] 00:34:55.718 thread=1 00:34:55.718 invalidate=1 00:34:55.718 rw=read 00:34:55.718 time_based=1 00:34:55.718 runtime=10 00:34:55.718 ioengine=libaio 00:34:55.718 direct=1 00:34:55.718 bs=4096 00:34:55.718 iodepth=1 00:34:55.718 norandommap=1 00:34:55.718 numjobs=1 00:34:55.718 00:34:55.718 [job0] 00:34:55.718 filename=/dev/nvme0n1 00:34:55.718 [job1] 00:34:55.718 filename=/dev/nvme0n2 00:34:55.718 [job2] 00:34:55.718 filename=/dev/nvme0n3 00:34:55.718 [job3] 00:34:55.718 filename=/dev/nvme0n4 00:34:55.718 Could not set queue depth (nvme0n1) 00:34:55.719 Could not set queue depth (nvme0n2) 00:34:55.719 Could not set queue depth (nvme0n3) 00:34:55.719 Could not set queue depth (nvme0n4) 00:34:55.988 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:55.989 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:55.989 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:55.989 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:55.989 fio-3.35 00:34:55.989 Starting 4 threads 00:34:58.530 07:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:58.790 07:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:58.790 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=258048, buflen=4096 00:34:58.790 fio: pid=3795303, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:58.790 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=2400256, buflen=4096 00:34:58.790 fio: pid=3795302, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:58.790 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:58.790 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:59.050 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:59.050 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:59.050 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=294912, buflen=4096 00:34:59.050 fio: pid=3795299, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:59.314 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:59.314 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:59.314 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=1581056, buflen=4096 00:34:59.314 fio: pid=3795300, 
err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:59.314 00:34:59.314 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3795299: Wed Nov 20 07:33:21 2024 00:34:59.314 read: IOPS=24, BW=96.1KiB/s (98.4kB/s)(288KiB/2997msec) 00:34:59.315 slat (usec): min=24, max=2507, avg=59.30, stdev=290.48 00:34:59.315 clat (usec): min=1131, max=42140, avg=41253.70, stdev=4806.36 00:34:59.315 lat (usec): min=1173, max=43897, avg=41313.47, stdev=4814.22 00:34:59.315 clat percentiles (usec): 00:34:59.315 | 1.00th=[ 1139], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:34:59.315 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:59.315 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:59.315 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:59.315 | 99.99th=[42206] 00:34:59.315 bw ( KiB/s): min= 96, max= 96, per=6.85%, avg=96.00, stdev= 0.00, samples=5 00:34:59.315 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:34:59.315 lat (msec) : 2=1.37%, 50=97.26% 00:34:59.315 cpu : usr=0.10%, sys=0.00%, ctx=74, majf=0, minf=1 00:34:59.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.315 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.315 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:59.315 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3795300: Wed Nov 20 07:33:21 2024 00:34:59.315 read: IOPS=122, BW=488KiB/s (500kB/s)(1544KiB/3161msec) 00:34:59.315 slat (usec): min=6, max=34619, avg=198.12, stdev=2014.89 00:34:59.315 clat (usec): min=195, max=42108, avg=7927.73, stdev=15448.76 00:34:59.315 lat (usec): min=202, max=76009, avg=8126.29, stdev=15784.48 00:34:59.315 clat percentiles (usec): 00:34:59.315 | 1.00th=[ 355], 5.00th=[ 465], 10.00th=[ 506], 20.00th=[ 553], 00:34:59.315 | 30.00th=[ 611], 40.00th=[ 938], 50.00th=[ 1057], 60.00th=[ 1090], 00:34:59.315 | 70.00th=[ 1139], 80.00th=[ 1205], 90.00th=[41681], 95.00th=[42206], 00:34:59.315 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:59.315 | 99.99th=[42206] 00:34:59.316 bw ( KiB/s): min= 88, max= 1008, per=34.69%, avg=486.67, stdev=427.16, samples=6 00:34:59.316 iops : min= 22, max= 252, avg=121.67, stdev=106.79, samples=6 00:34:59.316 lat (usec) : 250=0.26%, 500=9.04%, 750=22.74%, 1000=12.92% 00:34:59.316 lat (msec) : 2=37.21%, 4=0.26%, 50=17.31% 00:34:59.316 cpu : usr=0.16%, sys=0.35%, ctx=391, majf=0, minf=2 00:34:59.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.316 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.316 issued rwts: total=387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:59.316 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3795302: Wed Nov 20 07:33:21 2024 00:34:59.316 read: IOPS=211, BW=844KiB/s (865kB/s)(2344KiB/2776msec) 00:34:59.316 slat (usec): min=7, max=21163, avg=87.33, stdev=1065.75 00:34:59.316 clat (usec): min=469, max=42122, avg=4606.29, stdev=11467.95 
00:34:59.316 lat (usec): min=477, max=42150, avg=4693.72, stdev=11499.06 00:34:59.316 clat percentiles (usec): 00:34:59.316 | 1.00th=[ 619], 5.00th=[ 742], 10.00th=[ 848], 20.00th=[ 1004], 00:34:59.316 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1139], 00:34:59.316 | 70.00th=[ 1172], 80.00th=[ 1205], 90.00th=[ 1303], 95.00th=[41681], 00:34:59.316 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:59.316 | 99.99th=[42206] 00:34:59.316 bw ( KiB/s): min= 96, max= 2776, per=60.96%, avg=854.40, stdev=1109.63, samples=5 00:34:59.316 iops : min= 24, max= 694, avg=213.60, stdev=277.41, samples=5 00:34:59.316 lat (usec) : 500=0.17%, 750=6.13%, 1000=13.29% 00:34:59.316 lat (msec) : 2=71.55%, 50=8.69% 00:34:59.316 cpu : usr=0.22%, sys=0.68%, ctx=589, majf=0, minf=2 00:34:59.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.316 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.316 issued rwts: total=587,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:59.316 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3795303: Wed Nov 20 07:33:21 2024 00:34:59.317 read: IOPS=24, BW=96.4KiB/s (98.7kB/s)(252KiB/2615msec) 00:34:59.317 slat (nsec): min=23138, max=40167, avg=26065.69, stdev=1961.18 00:34:59.317 clat (usec): min=690, max=42116, avg=41142.48, stdev=5190.36 00:34:59.317 lat (usec): min=731, max=42142, avg=41168.55, stdev=5188.55 00:34:59.317 clat percentiles (usec): 00:34:59.317 | 1.00th=[ 693], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:34:59.317 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:59.317 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:59.317 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:59.317 | 99.99th=[42206] 00:34:59.317 bw ( KiB/s): min= 96, max= 96, per=6.85%, avg=96.00, stdev= 0.00, samples=5 00:34:59.317 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:34:59.317 lat (usec) : 750=1.56% 00:34:59.317 lat (msec) : 50=96.88% 00:34:59.317 cpu : usr=0.11%, sys=0.00%, ctx=64, majf=0, minf=2 00:34:59.317 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.317 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.317 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.317 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:59.317 00:34:59.317 Run status group 0 (all jobs): 00:34:59.317 READ: bw=1401KiB/s (1434kB/s), 96.1KiB/s-844KiB/s (98.4kB/s-865kB/s), io=4428KiB (4534kB), run=2615-3161msec 00:34:59.317 00:34:59.317 Disk stats (read/write): 00:34:59.319 nvme0n1: ios=68/0, merge=0/0, ticks=2805/0, in_queue=2805, util=94.72% 00:34:59.319 nvme0n2: ios=384/0, merge=0/0, ticks=2962/0, in_queue=2962, util=93.65% 00:34:59.319 nvme0n3: ios=536/0, merge=0/0, ticks=2551/0, in_queue=2551, util=96.03% 00:34:59.319 nvme0n4: ios=62/0, merge=0/0, ticks=2552/0, in_queue=2552, util=96.46% 00:34:59.319 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:59.319 07:33:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:59.584 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:59.584 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:59.844 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:59.844 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:59.844 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:59.844 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:35:00.104 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:35:00.104 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3795113 00:35:00.104 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:35:00.105 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:00.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:00.105 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:00.105 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:35:00.105 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:35:00.105 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:00.365 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:35:00.365 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:00.365 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:35:00.365 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:35:00.365 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:35:00.365 nvmf hotplug test: fio failed as expected 00:35:00.365 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:00.365 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:35:00.365 
07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:35:00.365 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:35:00.365 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:35:00.365 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:35:00.365 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:00.365 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:35:00.365 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:00.366 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:35:00.366 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:00.366 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:00.366 rmmod nvme_tcp 00:35:00.366 rmmod nvme_fabrics 00:35:00.626 rmmod nvme_keyring 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3791486 ']' 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3791486 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 3791486 ']' 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 3791486 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3791486 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3791486' 00:35:00.626 killing process with pid 3791486 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 3791486 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 3791486 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:00.626 07:33:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:00.626 07:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.167 07:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:03.167 00:35:03.167 real 0m27.673s 00:35:03.167 user 2m28.407s 00:35:03.167 sys 0m11.969s 00:35:03.167 07:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:03.167 07:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:03.167 ************************************ 00:35:03.167 END TEST nvmf_fio_target 00:35:03.167 ************************************ 00:35:03.167 07:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:03.167 07:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:03.167 07:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:03.167 07:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:03.167 ************************************ 00:35:03.167 START TEST nvmf_bdevio 00:35:03.167 ************************************ 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:03.167 * Looking for test storage... 
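Here the fio hotplug stage has finished tearing down and run_test hands control to the bdevio stage. A minimal sketch of replaying just that stage by hand, assuming the same checkout path as this node and the environment that autotest_common.sh normally provides:

# Re-run only the nvmf_bdevio stage traced below; the flags match the
# run_test invocation in this log.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: local checkout lives here
cd "$SPDK_DIR"
# --transport selects the NVMe-oF transport under test; --interrupt-mode
# switches the target's reactors from busy-polling to interrupt-driven.
./test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode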
00:35:03.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:03.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.167 --rc genhtml_branch_coverage=1 00:35:03.167 --rc genhtml_function_coverage=1 00:35:03.167 --rc genhtml_legend=1 00:35:03.167 --rc geninfo_all_blocks=1 00:35:03.167 --rc geninfo_unexecuted_blocks=1 00:35:03.167 00:35:03.167 ' 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:03.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.167 --rc genhtml_branch_coverage=1 00:35:03.167 --rc genhtml_function_coverage=1 00:35:03.167 --rc genhtml_legend=1 00:35:03.167 --rc geninfo_all_blocks=1 00:35:03.167 --rc geninfo_unexecuted_blocks=1 00:35:03.167 00:35:03.167 ' 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:03.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.167 --rc genhtml_branch_coverage=1 00:35:03.167 --rc genhtml_function_coverage=1 00:35:03.167 --rc genhtml_legend=1 00:35:03.167 --rc geninfo_all_blocks=1 00:35:03.167 --rc geninfo_unexecuted_blocks=1 00:35:03.167 00:35:03.167 ' 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:03.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.167 --rc genhtml_branch_coverage=1 00:35:03.167 --rc genhtml_function_coverage=1 00:35:03.167 --rc genhtml_legend=1 00:35:03.167 --rc geninfo_all_blocks=1 00:35:03.167 --rc geninfo_unexecuted_blocks=1 00:35:03.167 00:35:03.167 ' 00:35:03.167 07:33:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:03.167 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:03.168 07:33:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:35:03.168 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:11.310 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:11.310 07:33:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:11.310 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:11.310 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:11.310 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:11.310 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:11.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:11.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:35:11.311 00:35:11.311 --- 10.0.0.2 ping statistics --- 00:35:11.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:11.311 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:11.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:11.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:35:11.311 00:35:11.311 --- 10.0.0.1 ping statistics --- 00:35:11.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:11.311 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:11.311 07:33:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3800335 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3800335 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 3800335 ']' 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:11.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:11.311 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:11.311 [2024-11-20 07:33:32.743180] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:11.311 [2024-11-20 07:33:32.744301] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:35:11.311 [2024-11-20 07:33:32.744349] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:11.311 [2024-11-20 07:33:32.843496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:11.311 [2024-11-20 07:33:32.896823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:11.311 [2024-11-20 07:33:32.896873] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:11.311 [2024-11-20 07:33:32.896882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:11.311 [2024-11-20 07:33:32.896889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:11.311 [2024-11-20 07:33:32.896896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:11.311 [2024-11-20 07:33:32.898963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:11.311 [2024-11-20 07:33:32.899124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:11.311 [2024-11-20 07:33:32.899268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:11.311 [2024-11-20 07:33:32.899420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:11.312 [2024-11-20 07:33:32.975923] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
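With the reactors up on cores 3-6 (mask 0x78) and every spdk_thread moved to interrupt mode, the script provisions the target over JSON-RPC; the rpc_cmd calls are traced just below. A condensed sketch of that bring-up, using only commands visible in this log (the backgrounding and the wait for the RPC socket are simplifications):

# Start nvmf_tgt interrupt-driven inside the test network namespace.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
# Once /var/tmp/spdk.sock answers: create the TCP transport, back the
# namespace with a 64 MiB / 512 B-block malloc bdev, and listen on the
# namespaced address 10.0.0.2:4420.
rpc="$SPDK_DIR/scripts/rpc.py"
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512 -b Malloc0
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420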
00:35:11.312 [2024-11-20 07:33:32.977114] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:11.312 [2024-11-20 07:33:32.977189] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:11.312 [2024-11-20 07:33:32.977795] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:11.312 [2024-11-20 07:33:32.977870] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:11.312 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:11.312 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:35:11.312 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:11.312 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:11.312 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:11.573 [2024-11-20 07:33:33.608343] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:11.573 Malloc0 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.573 07:33:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:11.573 [2024-11-20 07:33:33.700469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:11.573 { 00:35:11.573 "params": { 00:35:11.573 "name": "Nvme$subsystem", 00:35:11.573 "trtype": "$TEST_TRANSPORT", 00:35:11.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:11.573 "adrfam": "ipv4", 00:35:11.573 "trsvcid": "$NVMF_PORT", 00:35:11.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:11.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:11.573 "hdgst": ${hdgst:-false}, 00:35:11.573 "ddgst": ${ddgst:-false} 00:35:11.573 }, 00:35:11.573 "method": "bdev_nvme_attach_controller" 00:35:11.573 } 00:35:11.573 EOF 00:35:11.573 )") 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:35:11.573 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:11.573 "params": { 00:35:11.573 "name": "Nvme1", 00:35:11.573 "trtype": "tcp", 00:35:11.573 "traddr": "10.0.0.2", 00:35:11.573 "adrfam": "ipv4", 00:35:11.573 "trsvcid": "4420", 00:35:11.573 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:11.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:11.573 "hdgst": false, 00:35:11.573 "ddgst": false 00:35:11.573 }, 00:35:11.573 "method": "bdev_nvme_attach_controller" 00:35:11.573 }' 00:35:11.573 [2024-11-20 07:33:33.758146] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
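The bdevio run above is driven entirely by JSON handed over an anonymous file descriptor: gen_nvmf_target_json expands the heredoc template (the $subsystem placeholders) into the concrete bdev_nvme_attach_controller document echoed by the trace, and bash process substitution is what surfaces as --json /dev/fd/62. A minimal sketch of that pattern (gen_config is an illustrative name; the JSON body is copied from the rendered output above):

gen_config() {
  # jq . both validates the document and pretty-prints it, as in the trace
  jq . <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
./test/bdev/bdevio/bdevio --json <(gen_config)   # the substitution appears as /dev/fd/62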
00:35:11.573 [2024-11-20 07:33:33.758227] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3800682 ] 00:35:11.842 [2024-11-20 07:33:33.851334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:11.842 [2024-11-20 07:33:33.908197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:11.842 [2024-11-20 07:33:33.908292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:11.842 [2024-11-20 07:33:33.908292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:12.182 I/O targets: 00:35:12.182 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:12.182 00:35:12.182 00:35:12.182 CUnit - A unit testing framework for C - Version 2.1-3 00:35:12.182 http://cunit.sourceforge.net/ 00:35:12.182 00:35:12.182 00:35:12.182 Suite: bdevio tests on: Nvme1n1 00:35:12.182 Test: blockdev write read block ...passed 00:35:12.182 Test: blockdev write zeroes read block ...passed 00:35:12.182 Test: blockdev write zeroes read no split ...passed 00:35:12.182 Test: blockdev write zeroes read split ...passed 00:35:12.182 Test: blockdev write zeroes read split partial ...passed 00:35:12.182 Test: blockdev reset ...[2024-11-20 07:33:34.323033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:12.182 [2024-11-20 07:33:34.323125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94e970 (9): Bad file descriptor 00:35:12.182 [2024-11-20 07:33:34.330343] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:35:12.182 passed 00:35:12.182 Test: blockdev write read 8 blocks ...passed 00:35:12.182 Test: blockdev write read size > 128k ...passed 00:35:12.182 Test: blockdev write read invalid size ...passed 00:35:12.182 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:12.182 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:12.182 Test: blockdev write read max offset ...passed 00:35:12.443 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:12.444 Test: blockdev writev readv 8 blocks ...passed 00:35:12.444 Test: blockdev writev readv 30 x 1block ...passed 00:35:12.444 Test: blockdev writev readv block ...passed 00:35:12.444 Test: blockdev writev readv size > 128k ...passed 00:35:12.444 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:12.444 Test: blockdev comparev and writev ...[2024-11-20 07:33:34.557702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:12.444 [2024-11-20 07:33:34.557754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.444 [2024-11-20 07:33:34.557771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:12.444 [2024-11-20 07:33:34.557780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.444 [2024-11-20 07:33:34.558382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:12.444 [2024-11-20 07:33:34.558398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:12.444 [2024-11-20 07:33:34.558412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:12.444 [2024-11-20 07:33:34.558422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:12.444 [2024-11-20 07:33:34.559029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:12.444 [2024-11-20 07:33:34.559042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:12.444 [2024-11-20 07:33:34.559056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:12.444 [2024-11-20 07:33:34.559064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:12.444 [2024-11-20 07:33:34.559681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:12.444 [2024-11-20 07:33:34.559694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:12.444 [2024-11-20 07:33:34.559709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:12.444 [2024-11-20 07:33:34.559717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:12.444 passed 00:35:12.444 Test: blockdev nvme passthru rw ...passed 00:35:12.444 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:33:34.644087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:12.444 [2024-11-20 07:33:34.644104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:12.444 [2024-11-20 07:33:34.644499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:12.444 [2024-11-20 07:33:34.644512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:12.444 [2024-11-20 07:33:34.644897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:12.444 [2024-11-20 07:33:34.644908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:12.444 [2024-11-20 07:33:34.645295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:12.444 [2024-11-20 07:33:34.645309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:12.444 passed 00:35:12.444 Test: blockdev nvme admin passthru ...passed 00:35:12.444 Test: blockdev copy ...passed 00:35:12.444 00:35:12.444 Run Summary: Type Total Ran Passed Failed Inactive 00:35:12.444 suites 1 1 n/a 0 0 00:35:12.444 tests 23 23 23 0 0 00:35:12.444 asserts 152 152 152 0 n/a 00:35:12.444 00:35:12.444 Elapsed time = 1.034 seconds 00:35:12.705 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:12.705 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.705 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:12.705 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.705 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:12.705 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:12.705 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:12.705 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:12.705 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:12.705 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:12.705 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:12.705 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:12.705 rmmod nvme_tcp 00:35:12.705 rmmod nvme_fabrics 00:35:12.705 rmmod nvme_keyring 00:35:12.705 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
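The suite above closed at 23/23 passed; the COMPARE FAILURE and ABORTED - FAILED FUSED completions in the comparev/writev output are the intended outcome of the fused compare-and-write cases (a deliberate miscompare fails the COMPARE, so the controller aborts its paired WRITE), and the INVALID OPCODE completions under the passthru test are likewise expected. The trap installed at nvmf/common.sh@512 now drives teardown; a condensed sketch of the steps traced below, with the pid and namespace taken from this run:

kill "$nvmfpid"                                         # killprocess 3800335
while kill -0 "$nvmfpid" 2>/dev/null; do sleep 1; done  # wait for the target to exit
modprobe -r nvme-tcp nvme-fabrics                       # matches the rmmod lines above
iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: drop the tagged ACCEPT rules
ip netns delete cvl_0_0_ns_spdk                         # remove_spdk_ns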
00:35:12.705 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:12.705 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:12.705 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3800335 ']' 00:35:12.705 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3800335 00:35:12.705 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 3800335 ']' 00:35:12.705 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 3800335 00:35:12.705 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:35:12.705 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:12.705 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3800335 00:35:12.965 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:35:12.965 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:35:12.965 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3800335' 00:35:12.965 killing process with pid 3800335 00:35:12.965 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 3800335 00:35:12.965 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 3800335 00:35:12.965 07:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:12.965 07:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:12.965 07:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:12.965 07:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:35:12.965 07:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:35:12.965 07:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:12.965 07:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:35:12.965 07:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:12.965 07:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:12.965 07:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:12.965 07:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:12.965 07:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:15.506 07:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:15.506 00:35:15.506 real 0m12.235s 00:35:15.506 user 
0m9.884s 00:35:15.506 sys 0m6.410s 00:35:15.506 07:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:15.506 07:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:15.506 ************************************ 00:35:15.506 END TEST nvmf_bdevio 00:35:15.506 ************************************ 00:35:15.507 07:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:15.507 00:35:15.507 real 5m0.584s 00:35:15.507 user 10m31.927s 00:35:15.507 sys 2m5.320s 00:35:15.507 07:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:15.507 07:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:15.507 ************************************ 00:35:15.507 END TEST nvmf_target_core_interrupt_mode 00:35:15.507 ************************************ 00:35:15.507 07:33:37 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:15.507 07:33:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:15.507 07:33:37 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:15.507 07:33:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:15.507 ************************************ 00:35:15.507 START TEST nvmf_interrupt 00:35:15.507 ************************************ 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:15.507 * Looking for test storage... 
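run_test wraps every suite the same way: print the START banner, time the script, print the END banner (the real/user/sys figures above are that timer). The trace that follows first locates the test storage directory and then probes the installed lcov so autotest_common.sh can choose coverage flags; that is the cmp_versions walk over the next lines. A behaviorally equivalent sketch of the dot-field comparison, written out here for illustration rather than copied from scripts/common.sh:

lt() {  # succeed when version $1 sorts strictly below version $2
  local IFS=.
  local -a a=($1) b=($2)
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    ((10#${a[i]:-0} < 10#${b[i]:-0})) && return 0
    ((10#${a[i]:-0} > 10#${b[i]:-0})) && return 1
  done
  return 1  # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* option spelling"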
00:35:15.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:15.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.507 --rc genhtml_branch_coverage=1 00:35:15.507 --rc genhtml_function_coverage=1 00:35:15.507 --rc genhtml_legend=1 00:35:15.507 --rc geninfo_all_blocks=1 00:35:15.507 --rc geninfo_unexecuted_blocks=1 00:35:15.507 00:35:15.507 ' 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:15.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.507 --rc genhtml_branch_coverage=1 00:35:15.507 --rc genhtml_function_coverage=1 00:35:15.507 --rc genhtml_legend=1 00:35:15.507 --rc geninfo_all_blocks=1 00:35:15.507 --rc geninfo_unexecuted_blocks=1 00:35:15.507 00:35:15.507 ' 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:15.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.507 --rc genhtml_branch_coverage=1 00:35:15.507 --rc genhtml_function_coverage=1 00:35:15.507 --rc genhtml_legend=1 00:35:15.507 --rc geninfo_all_blocks=1 00:35:15.507 --rc geninfo_unexecuted_blocks=1 00:35:15.507 00:35:15.507 ' 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:15.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.507 --rc genhtml_branch_coverage=1 00:35:15.507 --rc genhtml_function_coverage=1 00:35:15.507 --rc genhtml_legend=1 00:35:15.507 --rc geninfo_all_blocks=1 00:35:15.507 --rc geninfo_unexecuted_blocks=1 00:35:15.507 00:35:15.507 ' 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:15.507 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:15.508 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:15.508 07:33:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:15.508 07:33:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:15.508 07:33:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:15.508 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:15.508 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:15.508 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:15.508 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:15.508 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:15.508 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:15.508 07:33:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:15.508 07:33:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:15.508 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:15.508 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:15.508 07:33:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:15.508 07:33:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:23.648 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:23.648 07:33:44 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:23.648 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:23.648 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:23.648 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:23.648 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:23.649 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:23.649 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:23.649 07:33:44 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:23.649 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:23.649 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:23.649 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:23.649 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:23.649 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:23.649 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:23.649 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:23.649 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:23.649 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:23.649 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:23.649 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:23.649 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:23.649 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:23.649 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:23.649 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:23.649 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:23.649 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:23.649 07:33:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:23.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:23.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:35:23.649 00:35:23.649 --- 10.0.0.2 ping statistics --- 00:35:23.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:23.649 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:23.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:23.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:35:23.649 00:35:23.649 --- 10.0.0.1 ping statistics --- 00:35:23.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:23.649 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3805038 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3805038 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 3805038 ']' 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:23.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:23.649 [2024-11-20 07:33:45.162456] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:23.649 [2024-11-20 07:33:45.163592] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:35:23.649 [2024-11-20 07:33:45.163642] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:23.649 [2024-11-20 07:33:45.239165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:23.649 [2024-11-20 07:33:45.285433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:23.649 [2024-11-20 07:33:45.285485] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:23.649 [2024-11-20 07:33:45.285493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:23.649 [2024-11-20 07:33:45.285498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:23.649 [2024-11-20 07:33:45.285502] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:23.649 [2024-11-20 07:33:45.286971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:23.649 [2024-11-20 07:33:45.286973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:23.649 [2024-11-20 07:33:45.358593] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:23.649 [2024-11-20 07:33:45.358910] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:23.649 [2024-11-20 07:33:45.359330] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:23.649 5000+0 records in 00:35:23.649 5000+0 records out 00:35:23.649 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0196412 s, 521 MB/s 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:23.649 AIO0 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:23.649 [2024-11-20 07:33:45.531985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.649 07:33:45 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:23.649 [2024-11-20 07:33:45.576458] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3805038 0 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3805038 0 idle 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3805038 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:23.649 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3805038 -w 256 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3805038 root 20 0 128.2g 44928 32256 R 0.0 0.0 0:00.26 reactor_0' 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3805038 root 20 0 128.2g 44928 32256 R 0.0 0.0 0:00.26 reactor_0 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3805038 1 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3805038 1 idle 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3805038 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3805038 -w 256 00:35:23.650 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:23.910 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3805043 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:35:23.910 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3805043 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:35:23.910 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:23.910 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:23.910 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:23.910 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:23.910 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:23.911 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:23.911 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:23.911 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:23.911 07:33:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:23.911 07:33:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3805148 00:35:23.911 07:33:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:23.911 07:33:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:23.911 07:33:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:23.911 07:33:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3805038 0 00:35:23.911 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3805038 0 busy 00:35:23.911 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3805038 00:35:23.911 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:23.911 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:23.911 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:23.911 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:23.911 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:23.911 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:23.911 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:23.911 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:23.911 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3805038 -w 256 00:35:23.911 07:33:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:23.911 07:33:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3805038 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.27 reactor_0' 00:35:23.911 07:33:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3805038 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.27 reactor_0 00:35:23.911 07:33:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:23.911 07:33:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:23.911 07:33:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:23.911 07:33:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:23.911 07:33:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:23.911 07:33:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:23.911 07:33:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3805038 -w 256 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3805038 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.56 reactor_0' 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3805038 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.56 reactor_0 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3805038 1 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3805038 1 busy 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3805038 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3805038 -w 256 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3805043 root 20 0 128.2g 44928 32256 R 93.8 0.0 0:01.33 reactor_1' 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3805043 root 20 0 128.2g 44928 32256 R 93.8 0.0 0:01.33 reactor_1 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:25.296 07:33:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3805148 00:35:35.292 Initializing NVMe Controllers 00:35:35.292 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:35.292 Controller IO queue size 256, less than required. 00:35:35.292 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:35.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:35.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:35.292 Initialization complete. Launching workers. 
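For reference, the latency table below comes from the spdk_nvme_perf run launched above; the full command, reassembled from the two trace fragments (the trailing -c and its 0xC argument were split across entries), was:

    spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

That is: queue depth 256 (which is what prompts the "Controller IO queue size 256, less than required" notice above), 4096-byte I/Os, a random mixed workload where -M gives the read percentage (30% reads here), a 10 second run, and core mask 0xC, which is why the output associates the namespace with lcores 2 and 3.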
00:35:35.292 ========================================================
00:35:35.292 Latency(us)
00:35:35.292 Device Information : IOPS MiB/s Average min max
00:35:35.292 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19345.70 75.57 13237.31 3914.20 33484.36
00:35:35.292 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19361.70 75.63 13223.55 8050.96 29783.65
00:35:35.292 ========================================================
00:35:35.292 Total : 38707.40 151.20 13230.43 3914.20 33484.36
00:35:35.292
00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3805038 0 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3805038 0 idle 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3805038 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3805038 -w 256 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3805038 root 20 0 128.2g 44928 32256 S 6.2 0.0 0:20.27 reactor_0' 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3805038 root 20 0 128.2g 44928 32256 S 6.2 0.0 0:20.27 reactor_0 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3805038 1 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3805038 1 idle 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3805038 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt --
interrupt/common.sh@11 -- # local idx=1 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3805038 -w 256 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3805043 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3805043 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:35.292 07:33:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:35.292 07:33:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:35:35.292 07:33:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:35:35.292 07:33:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:35:35.292 07:33:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:35:35.292 07:33:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:35:37.201 07:33:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:35:37.201 07:33:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:35:37.201 07:33:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:35:37.201 07:33:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:35:37.201 07:33:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:35:37.201 07:33:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:35:37.201 07:33:59 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:35:37.201 07:33:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3805038 0 00:35:37.201 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3805038 0 idle 00:35:37.201 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3805038 00:35:37.201 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:37.201 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:37.201 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:37.201 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:37.201 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:37.201 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:37.201 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:37.201 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:37.201 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:37.201 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3805038 -w 256 00:35:37.201 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3805038 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.66 reactor_0' 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3805038 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.66 reactor_0 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3805038 1 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3805038 1 idle 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3805038 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
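The reactor_0 idle check just traced, and the reactor_1 check that continues below, both go through the reactor_is_busy_or_idle helper from interrupt/common.sh. A minimal sketch of its core logic, reconstructed from the xtrace entries (simplified to a single top sample; the traced helper retries up to 10 times, and the busy checks earlier ran with BUSY_THRESHOLD=30 exported by target/interrupt.sh instead of the default 65):

    reactor_is_busy_or_idle() {
        local pid=$1 idx=$2 state=$3
        local busy_threshold=65 idle_threshold=30
        # Sample the reactor thread once in batch mode; column 9 is %CPU.
        local top_reactor cpu_rate
        top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
        cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
        cpu_rate=${cpu_rate%.*}    # truncate the decimal: 99.9 -> 99, 0.0 -> 0
        if [[ $state == busy ]]; then
            (( cpu_rate >= busy_threshold ))   # busy: %CPU must reach the threshold
        else
            (( cpu_rate <= idle_threshold ))   # idle: %CPU must stay at or below it
        fi
    }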
00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3805038 -w 256 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3805043 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1' 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3805043 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:37.461 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:37.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:37.721 rmmod nvme_tcp 00:35:37.721 rmmod nvme_fabrics 00:35:37.721 rmmod nvme_keyring 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:37.721 07:33:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3805038 ']' 00:35:37.983 07:33:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3805038 00:35:37.983 07:33:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 3805038 ']' 00:35:37.983 07:33:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 3805038 00:35:37.983 07:33:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:35:37.983 07:34:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:37.983 07:34:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3805038 00:35:37.983 07:34:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:37.983 07:34:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:37.983 07:34:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3805038' 00:35:37.983 killing process with pid 3805038 00:35:37.983 07:34:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 3805038 00:35:37.983 07:34:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 3805038 00:35:37.983 07:34:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:37.983 07:34:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:37.983 07:34:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:37.983 07:34:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:37.983 07:34:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:37.983 07:34:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:37.983 07:34:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:35:37.983 07:34:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:37.984 07:34:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:37.984 07:34:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:37.984 07:34:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:37.984 07:34:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:40.529 07:34:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:40.529 00:35:40.529 real 0m24.922s 00:35:40.529 user 0m40.207s 00:35:40.529 sys 0m9.828s 00:35:40.529 07:34:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:40.529 07:34:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:40.529 ************************************ 00:35:40.529 END TEST nvmf_interrupt 00:35:40.529 ************************************ 00:35:40.529 00:35:40.529 real 30m9.723s 00:35:40.529 user 61m41.542s 00:35:40.529 sys 10m19.098s 00:35:40.529 07:34:02 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:40.529 07:34:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:40.529 ************************************ 00:35:40.529 END TEST nvmf_tcp 00:35:40.529 ************************************ 00:35:40.529 07:34:02 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:35:40.529 07:34:02 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:40.529 07:34:02 -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:40.529 07:34:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:40.529 07:34:02 -- common/autotest_common.sh@10 -- # set +x 00:35:40.529 ************************************ 00:35:40.529 START TEST spdkcli_nvmf_tcp 00:35:40.529 ************************************ 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:40.529 * Looking for test storage... 00:35:40.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:40.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.529 --rc genhtml_branch_coverage=1 00:35:40.529 --rc genhtml_function_coverage=1 00:35:40.529 --rc genhtml_legend=1 00:35:40.529 --rc geninfo_all_blocks=1 00:35:40.529 --rc geninfo_unexecuted_blocks=1 00:35:40.529 00:35:40.529 ' 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:40.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.529 --rc genhtml_branch_coverage=1 00:35:40.529 --rc genhtml_function_coverage=1 00:35:40.529 --rc genhtml_legend=1 00:35:40.529 --rc geninfo_all_blocks=1 00:35:40.529 --rc geninfo_unexecuted_blocks=1 00:35:40.529 00:35:40.529 ' 00:35:40.529 07:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:40.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.529 --rc genhtml_branch_coverage=1 00:35:40.529 --rc genhtml_function_coverage=1 00:35:40.529 --rc genhtml_legend=1 00:35:40.529 --rc geninfo_all_blocks=1 00:35:40.529 --rc geninfo_unexecuted_blocks=1 00:35:40.529 00:35:40.529 ' 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:40.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.530 --rc genhtml_branch_coverage=1 00:35:40.530 --rc genhtml_function_coverage=1 00:35:40.530 --rc genhtml_legend=1 00:35:40.530 --rc geninfo_all_blocks=1 00:35:40.530 --rc geninfo_unexecuted_blocks=1 00:35:40.530 00:35:40.530 ' 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:40.530 
07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:40.530 07:34:02 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:40.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3808547 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3808547 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 3808547 ']' 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:40.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:40.530 07:34:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:40.530 [2024-11-20 07:34:02.728493] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
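One recorded warning above is worth flagging: nvmf/common.sh line 33 executes '[' '' -eq 1 ']', so an empty variable reaches test(1)'s numeric -eq operator and bash prints '[: : integer expression expected' (the same warning recurs in the nvmf_identify_passthru section further down). The usual fix is to default the variable before the comparison; a hedged sketch, with SPDK_TEST_FOO as a stand-in since the trace does not name the empty variable:

    # Give the flag a default of 0 so the numeric test never sees an empty string.
    if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi

With the :-0 default the test behaves identically when the flag is set and simply stops tripping the integer-expression warning when it is not.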
00:35:40.530 [2024-11-20 07:34:02.728561] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3808547 ] 00:35:40.791 [2024-11-20 07:34:02.821387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:40.791 [2024-11-20 07:34:02.876404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:40.791 [2024-11-20 07:34:02.876440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:41.362 07:34:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:41.362 07:34:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:35:41.362 07:34:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:41.362 07:34:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:41.362 07:34:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:41.362 07:34:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:41.362 07:34:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:41.362 07:34:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:41.362 07:34:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:41.362 07:34:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:41.362 07:34:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:41.362 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:41.362 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:41.362 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:41.362 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:41.362 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:41.362 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:41.362 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:41.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:41.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:41.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:41.362 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:41.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:41.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:41.362 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:41.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:41.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:35:41.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:41.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:41.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:41.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:41.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:41.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:41.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:41.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:41.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:41.362 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:41.362 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:41.362 ' 00:35:44.668 [2024-11-20 07:34:06.339872] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:45.611 [2024-11-20 07:34:07.704010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:48.160 [2024-11-20 07:34:10.231209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:50.705 [2024-11-20 07:34:12.457517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:52.089 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:52.089 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:52.089 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:52.089 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:52.089 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:52.089 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:52.090 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:52.090 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:52.090 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:52.090 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:52.090 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:52.090 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:52.090 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:52.090 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:52.090 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:52.090 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:52.090 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:52.090 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:52.090 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:52.090 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:52.090 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:52.090 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:52.090 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:52.090 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:52.090 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:52.090 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:52.090 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:52.090 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:52.090 07:34:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:52.090 07:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:52.090 07:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:52.090 07:34:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:52.090 07:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:52.090 07:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:52.090 07:34:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:52.090 07:34:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:52.662 07:34:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:52.662 07:34:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:52.662 07:34:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:52.662 07:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:52.662 07:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:52.662 
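The spdkcli_check_match step that just ran is the verification half of this test: it dumps the live /nvmf tree with spdkcli.py and compares it against a golden file. A condensed sketch of check_match as reconstructed from the trace (the output redirection does not appear in xtrace, so the '>' is inferred; $SPDK_DIR stands in for the long workspace path):

    check_match() {
        # Dump the nvmf tree built by the create commands above.
        "$SPDK_DIR/scripts/spdkcli.py" ll /nvmf > "$SPDK_DIR/test/spdkcli/match_files/spdkcli_nvmf.test"
        # Compare against the expected listing recorded in the .match file.
        "$SPDK_DIR/test/app/match/match" "$SPDK_DIR/test/spdkcli/match_files/spdkcli_nvmf.test.match"
        rm -f "$SPDK_DIR/test/spdkcli/match_files/spdkcli_nvmf.test"
    }

A non-zero exit from the match tool would fail the test before the clear-config phase below ever runs.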
07:34:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:52.662 07:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:52.662 07:34:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:52.662 07:34:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:52.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:52.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:52.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:52.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:52.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:52.662 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:52.662 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:52.662 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:52.662 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:52.662 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:52.662 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:52.662 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:52.662 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:52.662 ' 00:35:59.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:59.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:59.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:59.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:59.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:59.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:59.245 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:59.245 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:59.245 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:59.245 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:59.245 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:59.245 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:59.245 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:59.245 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:59.245 
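The teardown traced next calls killprocess on the target twice, once from the test body and once from the cleanup trap, which is why the second call logs 'No such process' and 'Process with pid 3808547 is not found'. A condensed sketch of the helper's flow as reconstructed from the trace (the real autotest_common.sh version carries more error handling):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # no pid recorded, nothing to do
        if ! kill -0 "$pid"; then            # probe: is the process still alive?
            echo "Process with pid $pid is not found"
            return 0                          # already reaped by the earlier call
        fi
        # Refuse to signal privileged wrappers; here the target runs as reactor_0.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                           # reap it so the exit status is observed
    }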
07:34:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3808547 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 3808547 ']' 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 3808547 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3808547 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3808547' 00:35:59.245 killing process with pid 3808547 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 3808547 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 3808547 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3808547 ']' 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3808547 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 3808547 ']' 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 3808547 00:35:59.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3808547) - No such process 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 3808547 is not found' 00:35:59.245 Process with pid 3808547 is not found 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:59.245 00:35:59.245 real 0m18.217s 00:35:59.245 user 0m40.450s 00:35:59.245 sys 0m0.926s 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:59.245 07:34:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:59.245 ************************************ 00:35:59.245 END TEST spdkcli_nvmf_tcp 00:35:59.245 ************************************ 00:35:59.245 07:34:20 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:59.245 07:34:20 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:59.245 07:34:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:59.245 07:34:20 -- common/autotest_common.sh@10 -- # set +x 00:35:59.245 ************************************ 00:35:59.245 START TEST nvmf_identify_passthru 00:35:59.245 ************************************ 00:35:59.245 07:34:20 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:59.245 * Looking for test 
storage... 00:35:59.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:59.245 07:34:20 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:59.245 07:34:20 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:35:59.245 07:34:20 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:59.245 07:34:20 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:59.245 07:34:20 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:59.245 07:34:20 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:59.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.245 --rc genhtml_branch_coverage=1 00:35:59.245 --rc genhtml_function_coverage=1 00:35:59.245 --rc genhtml_legend=1 00:35:59.245 --rc geninfo_all_blocks=1 00:35:59.245 --rc geninfo_unexecuted_blocks=1 00:35:59.245 00:35:59.245 ' 00:35:59.245 07:34:20 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:59.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.245 --rc genhtml_branch_coverage=1 00:35:59.245 --rc genhtml_function_coverage=1 00:35:59.245 --rc genhtml_legend=1 00:35:59.245 --rc geninfo_all_blocks=1 00:35:59.245 --rc geninfo_unexecuted_blocks=1 00:35:59.245 00:35:59.245 ' 00:35:59.245 07:34:20 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:59.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.245 --rc genhtml_branch_coverage=1 00:35:59.245 --rc genhtml_function_coverage=1 00:35:59.245 --rc genhtml_legend=1 00:35:59.245 --rc geninfo_all_blocks=1 00:35:59.245 --rc geninfo_unexecuted_blocks=1 00:35:59.245 00:35:59.245 ' 00:35:59.245 07:34:20 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:59.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.245 --rc genhtml_branch_coverage=1 00:35:59.245 --rc genhtml_function_coverage=1 00:35:59.245 --rc genhtml_legend=1 00:35:59.245 --rc geninfo_all_blocks=1 00:35:59.245 --rc geninfo_unexecuted_blocks=1 00:35:59.245 00:35:59.245 ' 00:35:59.245 07:34:20 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:59.245 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:59.245 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:59.245 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:59.245 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:59.245 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:59.245 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:59.245 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:59.245 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:59.245 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:59.245 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:59.245 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:59.245 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:59.245 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:59.245 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:59.245 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:59.245 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:59.245 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:59.245 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:59.245 07:34:20 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:59.246 07:34:20 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:59.246 07:34:20 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:59.246 07:34:20 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:59.246 07:34:20 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.246 07:34:20 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.246 07:34:20 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.246 07:34:20 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:59.246 07:34:20 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.246 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:59.246 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:59.246 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:59.246 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:59.246 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:59.246 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:59.246 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:59.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:59.246 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:59.246 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:59.246 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:59.246 07:34:20 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:59.246 07:34:20 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:59.246 07:34:20 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:59.246 07:34:20 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:59.246 07:34:20 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:59.246 07:34:20 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.246 07:34:20 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.246 07:34:20 nvmf_identify_passthru -- paths/export.sh@4 -- # 
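Note the "[: : integer expression expected" complaint just above: test/nvmf/common.sh line 33 runs '[' '' -eq 1 ']', i.e. an empty variable reaches a numeric -eq test. The harness tolerates it (the test simply evaluates false), and the same message reappears later when dif.sh sources the file. A small sketch of the failure class and two conventional guards (the variable name is made up for illustration):

    unset MAYBE_FLAG
    [ "$MAYBE_FLAG" -eq 1 ] && echo on        # stderr: [: : integer expression expected
    [ "${MAYBE_FLAG:-0}" -eq 1 ] && echo on   # guard 1: default the empty value to 0
    [[ $MAYBE_FLAG -eq 1 ]] && echo on        # guard 2: [[ ]] arithmetic treats empty as 0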
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.246 07:34:20 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:59.246 07:34:20 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.246 07:34:20 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:59.246 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:59.246 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:59.246 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:59.246 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:59.246 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:59.246 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:59.246 07:34:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:59.246 07:34:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:59.246 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:59.246 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:59.246 07:34:20 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:59.246 07:34:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:36:05.833 07:34:27 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:05.833 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:05.833 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:05.833 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:05.833 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:05.833 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:05.834 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:05.834 07:34:27 nvmf_identify_passthru -- 
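Each whitelisted PCI function above is resolved to its kernel interface by globbing sysfs, which is how 0000:4b:00.0 turns into cvl_0_0. A self-contained sketch of that lookup (the address is the E810 port from this run; no SPDK helpers involved):

    pci=0000:4b:00.0
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$netdir" ] || continue          # an unmatched glob stays literal
        dev=${netdir##*/}                     # e.g. cvl_0_0
        echo "Found net device under $pci: $dev"
    done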
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:05.834 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:05.834 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:05.834 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:05.834 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:05.834 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:05.834 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:05.834 07:34:27 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:05.834 07:34:28 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:05.834 07:34:28 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:05.834 07:34:28 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:05.834 07:34:28 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:06.095 07:34:28 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:06.095 07:34:28 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:06.095 07:34:28 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:06.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:06.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:36:06.095 00:36:06.095 --- 10.0.0.2 ping statistics --- 00:36:06.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:06.095 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:36:06.095 07:34:28 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:06.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:06.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:36:06.095 00:36:06.095 --- 10.0.0.1 ping statistics --- 00:36:06.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:06.095 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:36:06.095 07:34:28 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:06.095 07:34:28 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:36:06.095 07:34:28 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:06.095 07:34:28 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:06.095 07:34:28 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:06.095 07:34:28 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:06.095 07:34:28 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:06.095 07:34:28 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:06.095 07:34:28 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:06.095 07:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:06.095 07:34:28 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:06.095 07:34:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:06.095 07:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:06.095 07:34:28 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:36:06.095 07:34:28 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:36:06.095 07:34:28 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:36:06.095 07:34:28 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:36:06.095 07:34:28 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:36:06.095 07:34:28 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:36:06.095 07:34:28 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:06.095 07:34:28 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:06.095 07:34:28 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:36:06.095 07:34:28 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:36:06.095 07:34:28 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:36:06.095 07:34:28 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:36:06.095 07:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:36:06.095 07:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:36:06.095 07:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:06.095 07:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:06.095 07:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:06.668 07:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 
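The two ping exchanges just above validate the namespace split that nvmf_tcp_init built: one E810 port is moved into a private namespace to act as the target (10.0.0.2), its sibling stays in the root namespace as the initiator (10.0.0.1), and an iptables rule admits port 4420. A consolidated replay of the commands from the trace (root privileges and this machine's cvl_* interfaces assumed):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1

The comment tag matters later: teardown removes exactly the rules labelled SPDK_NVMF.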
-- # nvme_serial_number=S64GNE0R605487 00:36:06.668 07:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:06.668 07:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:06.668 07:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:07.241 07:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:36:07.241 07:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:07.241 07:34:29 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:07.241 07:34:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:07.241 07:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:07.241 07:34:29 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:07.241 07:34:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:07.241 07:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3815819 00:36:07.241 07:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:07.241 07:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:07.241 07:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3815819 00:36:07.241 07:34:29 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 3815819 ']' 00:36:07.241 07:34:29 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:07.241 07:34:29 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:07.241 07:34:29 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:07.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:07.241 07:34:29 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:07.241 07:34:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:07.241 [2024-11-20 07:34:29.404539] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:36:07.241 [2024-11-20 07:34:29.404612] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:07.242 [2024-11-20 07:34:29.502677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:07.503 [2024-11-20 07:34:29.557096] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:07.503 [2024-11-20 07:34:29.557150] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
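get_first_nvme_bdf above asks gen_nvme.sh for a bdev config and pulls the PCIe address out with jq; the serial and model then come from plain grep/awk over identify output. A sketch of that extraction with the binary from this run (note that awk '{print $3}' keeps only the first whitespace-separated token, which is why the model is logged as just SAMSUNG):

    IDENTIFY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
    bdf=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh \
        | jq -r '.config[].params.traddr' | head -n1)       # 0000:65:00.0 here
    serial=$("$IDENTIFY" -r "trtype:PCIe traddr:$bdf" -i 0 | awk '/Serial Number:/ {print $3}')
    model=$("$IDENTIFY" -r "trtype:PCIe traddr:$bdf" -i 0 | awk '/Model Number:/ {print $3}')
    echo "serial=$serial model=$model"                      # S64GNE0R605487 / SAMSUNG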
00:36:07.503 [2024-11-20 07:34:29.557169] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:07.503 [2024-11-20 07:34:29.557177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:07.503 [2024-11-20 07:34:29.557183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:07.503 [2024-11-20 07:34:29.559523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:07.503 [2024-11-20 07:34:29.559691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:07.503 [2024-11-20 07:34:29.559851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:07.503 [2024-11-20 07:34:29.559852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:08.076 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:08.076 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:36:08.076 07:34:30 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:08.076 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.076 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:08.076 INFO: Log level set to 20 00:36:08.076 INFO: Requests: 00:36:08.076 { 00:36:08.076 "jsonrpc": "2.0", 00:36:08.076 "method": "nvmf_set_config", 00:36:08.076 "id": 1, 00:36:08.076 "params": { 00:36:08.076 "admin_cmd_passthru": { 00:36:08.076 "identify_ctrlr": true 00:36:08.076 } 00:36:08.076 } 00:36:08.076 } 00:36:08.076 00:36:08.076 INFO: response: 00:36:08.076 { 00:36:08.076 "jsonrpc": "2.0", 00:36:08.076 "id": 1, 00:36:08.076 "result": true 00:36:08.076 } 00:36:08.076 00:36:08.076 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.076 07:34:30 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:08.076 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.076 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:08.076 INFO: Setting log level to 20 00:36:08.076 INFO: Setting log level to 20 00:36:08.076 INFO: Log level set to 20 00:36:08.076 INFO: Log level set to 20 00:36:08.076 INFO: Requests: 00:36:08.076 { 00:36:08.076 "jsonrpc": "2.0", 00:36:08.076 "method": "framework_start_init", 00:36:08.076 "id": 1 00:36:08.076 } 00:36:08.076 00:36:08.076 INFO: Requests: 00:36:08.076 { 00:36:08.076 "jsonrpc": "2.0", 00:36:08.076 "method": "framework_start_init", 00:36:08.076 "id": 1 00:36:08.076 } 00:36:08.076 00:36:08.076 [2024-11-20 07:34:30.313713] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:08.076 INFO: response: 00:36:08.076 { 00:36:08.076 "jsonrpc": "2.0", 00:36:08.076 "id": 1, 00:36:08.076 "result": true 00:36:08.076 } 00:36:08.076 00:36:08.076 INFO: response: 00:36:08.076 { 00:36:08.076 "jsonrpc": "2.0", 00:36:08.076 "id": 1, 00:36:08.076 "result": true 00:36:08.076 } 00:36:08.076 00:36:08.076 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.076 07:34:30 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:08.076 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.076 07:34:30 nvmf_identify_passthru -- 
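Because nvmf_tgt was launched with --wait-for-rpc, ordering is the whole point of this RPC exchange: nvmf_set_config with --passthru-identify-ctrlr has to land before framework_start_init (the "Custom identify ctrlr handler enabled" notice confirms it took effect), and only then is the TCP transport created. The same sequence via SPDK's stock RPC client, flags copied verbatim from the trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_set_config --passthru-identify-ctrlr   # must precede framework init
    $RPC framework_start_init                        # releases the --wait-for-rpc hold
    $RPC nvmf_create_transport -t tcp -o -u 8192     # flags exactly as issued in this run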
common/autotest_common.sh@10 -- # set +x 00:36:08.076 INFO: Setting log level to 40 00:36:08.076 INFO: Setting log level to 40 00:36:08.076 INFO: Setting log level to 40 00:36:08.076 [2024-11-20 07:34:30.327303] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:08.076 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.076 07:34:30 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:08.076 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:08.076 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:08.338 07:34:30 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:36:08.338 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.338 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:08.671 Nvme0n1 00:36:08.671 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.671 07:34:30 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:08.671 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.671 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:08.671 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.671 07:34:30 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:08.671 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.671 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:08.671 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.671 07:34:30 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:08.671 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.671 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:08.671 [2024-11-20 07:34:30.733221] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:08.671 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.671 07:34:30 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:08.671 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.671 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:08.671 [ 00:36:08.671 { 00:36:08.671 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:08.671 "subtype": "Discovery", 00:36:08.671 "listen_addresses": [], 00:36:08.671 "allow_any_host": true, 00:36:08.671 "hosts": [] 00:36:08.671 }, 00:36:08.671 { 00:36:08.671 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:08.671 "subtype": "NVMe", 00:36:08.671 "listen_addresses": [ 00:36:08.671 { 00:36:08.671 "trtype": "TCP", 00:36:08.671 "adrfam": "IPv4", 00:36:08.671 "traddr": "10.0.0.2", 00:36:08.671 "trsvcid": "4420" 00:36:08.671 } 00:36:08.671 ], 00:36:08.671 "allow_any_host": true, 00:36:08.671 "hosts": [], 00:36:08.671 "serial_number": 
"SPDK00000000000001", 00:36:08.671 "model_number": "SPDK bdev Controller", 00:36:08.671 "max_namespaces": 1, 00:36:08.671 "min_cntlid": 1, 00:36:08.671 "max_cntlid": 65519, 00:36:08.672 "namespaces": [ 00:36:08.672 { 00:36:08.672 "nsid": 1, 00:36:08.672 "bdev_name": "Nvme0n1", 00:36:08.672 "name": "Nvme0n1", 00:36:08.672 "nguid": "36344730526054870025384500000044", 00:36:08.672 "uuid": "36344730-5260-5487-0025-384500000044" 00:36:08.672 } 00:36:08.672 ] 00:36:08.672 } 00:36:08.672 ] 00:36:08.672 07:34:30 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.672 07:34:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:08.672 07:34:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:08.672 07:34:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:08.981 07:34:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:36:08.981 07:34:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:08.981 07:34:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:08.981 07:34:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:08.982 07:34:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:36:08.982 07:34:31 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:36:08.982 07:34:31 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:36:08.982 07:34:31 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:08.982 07:34:31 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.982 07:34:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:08.982 07:34:31 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.982 07:34:31 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:08.982 07:34:31 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:08.982 07:34:31 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:08.982 07:34:31 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:08.982 07:34:31 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:08.982 07:34:31 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:08.982 07:34:31 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:08.982 07:34:31 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:08.982 rmmod nvme_tcp 00:36:09.244 rmmod nvme_fabrics 00:36:09.244 rmmod nvme_keyring 00:36:09.244 07:34:31 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:09.244 07:34:31 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:09.244 07:34:31 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:09.244 07:34:31 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
3815819 ']' 00:36:09.244 07:34:31 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3815819 00:36:09.244 07:34:31 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 3815819 ']' 00:36:09.244 07:34:31 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 3815819 00:36:09.244 07:34:31 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:36:09.244 07:34:31 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:09.244 07:34:31 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3815819 00:36:09.244 07:34:31 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:09.244 07:34:31 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:09.244 07:34:31 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3815819' 00:36:09.244 killing process with pid 3815819 00:36:09.244 07:34:31 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 3815819 00:36:09.244 07:34:31 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 3815819 00:36:09.505 07:34:31 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:09.505 07:34:31 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:09.505 07:34:31 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:09.505 07:34:31 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:09.505 07:34:31 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:36:09.505 07:34:31 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:09.505 07:34:31 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:36:09.505 07:34:31 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:09.505 07:34:31 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:09.505 07:34:31 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:09.505 07:34:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:09.505 07:34:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:12.052 07:34:33 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:12.052 00:36:12.052 real 0m13.017s 00:36:12.052 user 0m10.537s 00:36:12.052 sys 0m6.527s 00:36:12.052 07:34:33 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:12.052 07:34:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:12.052 ************************************ 00:36:12.052 END TEST nvmf_identify_passthru 00:36:12.052 ************************************ 00:36:12.052 07:34:33 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:12.052 07:34:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:12.052 07:34:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:12.052 07:34:33 -- common/autotest_common.sh@10 -- # set +x 00:36:12.052 ************************************ 00:36:12.052 START TEST nvmf_dif 00:36:12.052 ************************************ 00:36:12.052 07:34:33 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:12.052 * Looking for test storage... 
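Teardown above follows a defensive pattern: confirm the pid still exists and really is the reactor before killing it, reap it, then strip only the iptables rules tagged SPDK_NVMF and flush the test addresses. A condensed sketch (the pid is this run's nvmfpid; wait only reaps children of the invoking shell, which holds in the harness):

    pid=3815819
    if kill -0 "$pid" 2>/dev/null && [ "$(ps --no-headers -o comm= "$pid")" = reactor_0 ]; then
        kill "$pid" && wait "$pid" 2>/dev/null
    fi
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rules
    ip -4 addr flush cvl_0_1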
00:36:12.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:12.053 07:34:33 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:12.053 07:34:33 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:36:12.053 07:34:33 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:12.053 07:34:33 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:12.053 07:34:33 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:12.053 07:34:33 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:12.053 07:34:33 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:12.053 07:34:33 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:12.053 07:34:33 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:12.053 07:34:34 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:12.053 07:34:34 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:12.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.053 --rc genhtml_branch_coverage=1 00:36:12.053 --rc genhtml_function_coverage=1 00:36:12.053 --rc genhtml_legend=1 00:36:12.053 --rc geninfo_all_blocks=1 00:36:12.053 --rc geninfo_unexecuted_blocks=1 00:36:12.053 00:36:12.053 ' 00:36:12.053 07:34:34 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:12.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.053 --rc genhtml_branch_coverage=1 00:36:12.053 --rc genhtml_function_coverage=1 00:36:12.053 --rc genhtml_legend=1 00:36:12.053 --rc geninfo_all_blocks=1 00:36:12.053 --rc geninfo_unexecuted_blocks=1 00:36:12.053 00:36:12.053 ' 00:36:12.053 07:34:34 nvmf_dif -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:36:12.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.053 --rc genhtml_branch_coverage=1 00:36:12.053 --rc genhtml_function_coverage=1 00:36:12.053 --rc genhtml_legend=1 00:36:12.053 --rc geninfo_all_blocks=1 00:36:12.053 --rc geninfo_unexecuted_blocks=1 00:36:12.053 00:36:12.053 ' 00:36:12.053 07:34:34 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:12.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.053 --rc genhtml_branch_coverage=1 00:36:12.053 --rc genhtml_function_coverage=1 00:36:12.053 --rc genhtml_legend=1 00:36:12.053 --rc geninfo_all_blocks=1 00:36:12.053 --rc geninfo_unexecuted_blocks=1 00:36:12.053 00:36:12.053 ' 00:36:12.053 07:34:34 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:12.053 07:34:34 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:12.053 07:34:34 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.053 07:34:34 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.053 07:34:34 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.053 07:34:34 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:12.053 07:34:34 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:12.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:12.053 07:34:34 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:12.053 07:34:34 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:12.053 07:34:34 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:12.053 07:34:34 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:12.053 07:34:34 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:12.053 07:34:34 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:12.053 07:34:34 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:12.053 07:34:34 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:36:12.053 07:34:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:20.201 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:20.201 
07:34:41 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:20.201 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:20.201 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:20.201 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:20.201 07:34:41 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:20.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:20.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:36:20.202 00:36:20.202 --- 10.0.0.2 ping statistics --- 00:36:20.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:20.202 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:20.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:20.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:36:20.202 00:36:20.202 --- 10.0.0.1 ping statistics --- 00:36:20.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:20.202 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:20.202 07:34:41 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:22.753 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:22.753 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:22.753 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:22.753 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:22.753 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:22.753 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:22.753 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:22.753 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:22.753 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:22.753 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:22.753 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:22.753 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:22.753 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:22.753 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:22.753 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:22.753 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:22.753 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:23.014 07:34:45 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:23.014 07:34:45 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:23.014 07:34:45 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:23.014 07:34:45 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:23.014 07:34:45 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:23.014 07:34:45 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:23.275 07:34:45 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:23.275 07:34:45 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:23.275 07:34:45 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:23.275 07:34:45 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:23.275 07:34:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:23.275 07:34:45 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3821864 00:36:23.275 07:34:45 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3821864 00:36:23.275 07:34:45 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:23.275 07:34:45 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 3821864 ']' 00:36:23.275 07:34:45 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:23.275 07:34:45 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:23.275 07:34:45 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
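The setup.sh listing above ("Already using the vfio-pci driver") is reporting each device's bound driver, which the kernel exposes as a sysfs symlink. A sketch of the underlying check (plain sysfs, no SPDK involved):

    for dev in /sys/bus/pci/devices/*; do
        drv=$(readlink "$dev/driver" 2>/dev/null) || continue   # unbound device: skip
        [ "${drv##*/}" = vfio-pci ] && echo "${dev##*/}: already on vfio-pci"
    done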
/var/tmp/spdk.sock...' 00:36:23.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:23.275 07:34:45 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:23.275 07:34:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:23.275 [2024-11-20 07:34:45.395872] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:36:23.275 [2024-11-20 07:34:45.395939] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:23.275 [2024-11-20 07:34:45.494235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:23.275 [2024-11-20 07:34:45.537354] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:23.276 [2024-11-20 07:34:45.537387] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:23.276 [2024-11-20 07:34:45.537395] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:23.276 [2024-11-20 07:34:45.537403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:23.276 [2024-11-20 07:34:45.537408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:23.276 [2024-11-20 07:34:45.538015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:24.216 07:34:46 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:24.216 07:34:46 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:36:24.216 07:34:46 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:24.216 07:34:46 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:24.216 07:34:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:24.216 07:34:46 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:24.216 07:34:46 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:24.216 07:34:46 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:24.216 07:34:46 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.216 07:34:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:24.216 [2024-11-20 07:34:46.227152] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:24.216 07:34:46 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.216 07:34:46 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:24.216 07:34:46 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:24.216 07:34:46 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:24.216 07:34:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:24.216 ************************************ 00:36:24.216 START TEST fio_dif_1_default 00:36:24.216 ************************************ 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:24.216 bdev_null0 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:24.216 [2024-11-20 07:34:46.315525] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:24.216 { 00:36:24.216 "params": { 00:36:24.216 "name": "Nvme$subsystem", 00:36:24.216 "trtype": "$TEST_TRANSPORT", 00:36:24.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:24.216 "adrfam": "ipv4", 00:36:24.216 "trsvcid": "$NVMF_PORT", 00:36:24.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:24.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:24.216 "hdgst": ${hdgst:-false}, 00:36:24.216 
"ddgst": ${ddgst:-false} 00:36:24.216 }, 00:36:24.216 "method": "bdev_nvme_attach_controller" 00:36:24.216 } 00:36:24.216 EOF 00:36:24.216 )") 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:24.216 "params": { 00:36:24.216 "name": "Nvme0", 00:36:24.216 "trtype": "tcp", 00:36:24.216 "traddr": "10.0.0.2", 00:36:24.216 "adrfam": "ipv4", 00:36:24.216 "trsvcid": "4420", 00:36:24.216 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:24.216 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:24.216 "hdgst": false, 00:36:24.216 "ddgst": false 00:36:24.216 }, 00:36:24.216 "method": "bdev_nvme_attach_controller" 00:36:24.216 }' 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:24.216 07:34:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:24.921 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:24.921 fio-3.35 00:36:24.921 Starting 1 thread 00:36:37.156 00:36:37.157 filename0: (groupid=0, jobs=1): err= 0: pid=3822397: Wed Nov 20 07:34:57 2024 00:36:37.157 read: IOPS=190, BW=760KiB/s (779kB/s)(7632KiB/10038msec) 00:36:37.157 slat (nsec): min=5456, max=35844, avg=6256.60, stdev=1903.53 00:36:37.157 clat (usec): min=441, max=41618, avg=21025.63, stdev=20257.96 00:36:37.157 lat (usec): min=447, max=41644, avg=21031.88, stdev=20257.93 00:36:37.157 clat percentiles (usec): 00:36:37.157 | 1.00th=[ 627], 5.00th=[ 693], 10.00th=[ 701], 20.00th=[ 709], 00:36:37.157 | 30.00th=[ 717], 40.00th=[ 766], 50.00th=[40633], 60.00th=[41157], 00:36:37.157 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:37.157 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:36:37.157 | 99.99th=[41681] 00:36:37.157 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=761.60, stdev=19.70, samples=20 00:36:37.157 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20 00:36:37.157 lat (usec) : 500=0.21%, 750=37.84%, 1000=11.64% 00:36:37.157 lat (msec) : 2=0.21%, 50=50.10% 00:36:37.157 cpu : usr=93.27%, sys=6.51%, ctx=29, majf=0, minf=232 00:36:37.157 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:37.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.157 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:37.157 latency : target=0, window=0, percentile=100.00%, 
depth=4 00:36:37.157 00:36:37.157 Run status group 0 (all jobs): 00:36:37.157 READ: bw=760KiB/s (779kB/s), 760KiB/s-760KiB/s (779kB/s-779kB/s), io=7632KiB (7815kB), run=10038-10038msec 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.157 00:36:37.157 real 0m11.343s 00:36:37.157 user 0m26.931s 00:36:37.157 sys 0m1.012s 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:37.157 ************************************ 00:36:37.157 END TEST fio_dif_1_default 00:36:37.157 ************************************ 00:36:37.157 07:34:57 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:37.157 07:34:57 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:37.157 07:34:57 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:37.157 07:34:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:37.157 ************************************ 00:36:37.157 START TEST fio_dif_1_multi_subsystems 00:36:37.157 ************************************ 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:37.157 bdev_null0 00:36:37.157 07:34:57 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:37.157 [2024-11-20 07:34:57.741582] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:37.157 bdev_null1 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:37.157 { 00:36:37.157 "params": { 00:36:37.157 "name": "Nvme$subsystem", 00:36:37.157 "trtype": "$TEST_TRANSPORT", 00:36:37.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:37.157 "adrfam": "ipv4", 00:36:37.157 "trsvcid": "$NVMF_PORT", 00:36:37.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:37.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:37.157 "hdgst": ${hdgst:-false}, 00:36:37.157 "ddgst": ${ddgst:-false} 00:36:37.157 }, 00:36:37.157 "method": "bdev_nvme_attach_controller" 00:36:37.157 } 00:36:37.157 EOF 00:36:37.157 )") 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:36:37.157 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:37.158 
07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:37.158 { 00:36:37.158 "params": { 00:36:37.158 "name": "Nvme$subsystem", 00:36:37.158 "trtype": "$TEST_TRANSPORT", 00:36:37.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:37.158 "adrfam": "ipv4", 00:36:37.158 "trsvcid": "$NVMF_PORT", 00:36:37.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:37.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:37.158 "hdgst": ${hdgst:-false}, 00:36:37.158 "ddgst": ${ddgst:-false} 00:36:37.158 }, 00:36:37.158 "method": "bdev_nvme_attach_controller" 00:36:37.158 } 00:36:37.158 EOF 00:36:37.158 )") 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:37.158 "params": { 00:36:37.158 "name": "Nvme0", 00:36:37.158 "trtype": "tcp", 00:36:37.158 "traddr": "10.0.0.2", 00:36:37.158 "adrfam": "ipv4", 00:36:37.158 "trsvcid": "4420", 00:36:37.158 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:37.158 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:37.158 "hdgst": false, 00:36:37.158 "ddgst": false 00:36:37.158 }, 00:36:37.158 "method": "bdev_nvme_attach_controller" 00:36:37.158 },{ 00:36:37.158 "params": { 00:36:37.158 "name": "Nvme1", 00:36:37.158 "trtype": "tcp", 00:36:37.158 "traddr": "10.0.0.2", 00:36:37.158 "adrfam": "ipv4", 00:36:37.158 "trsvcid": "4420", 00:36:37.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:37.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:37.158 "hdgst": false, 00:36:37.158 "ddgst": false 00:36:37.158 }, 00:36:37.158 "method": "bdev_nvme_attach_controller" 00:36:37.158 }' 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 
-- # asan_lib= 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:37.158 07:34:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:37.158 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:37.158 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:37.158 fio-3.35 00:36:37.158 Starting 2 threads 00:36:47.154 00:36:47.154 filename0: (groupid=0, jobs=1): err= 0: pid=3824878: Wed Nov 20 07:35:08 2024 00:36:47.154 read: IOPS=190, BW=763KiB/s (781kB/s)(7632KiB/10002msec) 00:36:47.154 slat (nsec): min=5467, max=41271, avg=6643.56, stdev=2063.32 00:36:47.154 clat (usec): min=388, max=42857, avg=20949.89, stdev=20191.45 00:36:47.154 lat (usec): min=394, max=42877, avg=20956.53, stdev=20191.20 00:36:47.154 clat percentiles (usec): 00:36:47.154 | 1.00th=[ 553], 5.00th=[ 701], 10.00th=[ 775], 20.00th=[ 816], 00:36:47.154 | 30.00th=[ 832], 40.00th=[ 848], 50.00th=[ 1975], 60.00th=[41157], 00:36:47.154 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:47.154 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:36:47.154 | 99.99th=[42730] 00:36:47.154 bw ( KiB/s): min= 704, max= 768, per=66.31%, avg=764.63, stdev=14.68, samples=19 00:36:47.154 iops : min= 176, max= 192, avg=191.16, stdev= 3.67, samples=19 00:36:47.154 lat (usec) : 500=0.84%, 750=7.18%, 1000=41.30% 00:36:47.154 lat (msec) : 2=0.79%, 50=49.90% 00:36:47.154 cpu : usr=95.82%, sys=3.95%, ctx=18, majf=0, minf=193 00:36:47.154 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:47.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:47.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:47.154 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:47.154 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:47.154 filename1: (groupid=0, jobs=1): err= 0: pid=3824879: Wed Nov 20 07:35:08 2024 00:36:47.154 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10013msec) 00:36:47.154 slat (nsec): min=5465, max=32779, avg=6950.46, stdev=2152.90 00:36:47.154 clat (usec): min=40812, max=42095, avg=41016.94, stdev=193.82 00:36:47.154 lat (usec): min=40820, max=42102, avg=41023.89, stdev=194.34 00:36:47.154 clat percentiles (usec): 00:36:47.154 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:47.154 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:47.154 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:47.154 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:47.154 | 99.99th=[42206] 00:36:47.154 bw ( KiB/s): min= 384, max= 416, per=33.68%, avg=388.80, stdev=11.72, samples=20 00:36:47.154 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:36:47.154 lat (msec) : 50=100.00% 00:36:47.154 cpu : usr=95.75%, sys=4.03%, ctx=14, majf=0, minf=104 00:36:47.154 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:47.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:36:47.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:47.154 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:47.154 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:47.154 00:36:47.154 Run status group 0 (all jobs): 00:36:47.154 READ: bw=1152KiB/s (1180kB/s), 390KiB/s-763KiB/s (399kB/s-781kB/s), io=11.3MiB (11.8MB), run=10002-10013msec 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.155 00:36:47.155 real 0m11.445s 00:36:47.155 user 0m34.236s 00:36:47.155 sys 0m1.181s 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:47.155 07:35:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:47.155 ************************************ 00:36:47.155 END TEST fio_dif_1_multi_subsystems 00:36:47.155 ************************************ 00:36:47.155 07:35:09 nvmf_dif -- target/dif.sh@143 -- # run_test 
fio_dif_rand_params fio_dif_rand_params 00:36:47.155 07:35:09 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:47.155 07:35:09 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:47.155 07:35:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:47.155 ************************************ 00:36:47.155 START TEST fio_dif_rand_params 00:36:47.155 ************************************ 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.155 bdev_null0 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.155 [2024-11-20 07:35:09.265728] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:47.155 
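The create_subsystems sequence that just ran reduces to four RPCs; rpc_cmd is a thin wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock socket. Replayed by hand with the same arguments as this NULL_DIF=3 pass (a 64 MB null bdev, 512-byte blocks plus 16 bytes of per-block metadata carrying type-3 protection information):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420

The RPC socket is a UNIX-domain socket in the filesystem, which is why these calls work from the root namespace even though nvmf_tgt itself was started inside cvl_0_0_ns_spdk.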
07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:47.155 { 00:36:47.155 "params": { 00:36:47.155 "name": "Nvme$subsystem", 00:36:47.155 "trtype": "$TEST_TRANSPORT", 00:36:47.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:47.155 "adrfam": "ipv4", 00:36:47.155 "trsvcid": "$NVMF_PORT", 00:36:47.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:47.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:47.155 "hdgst": ${hdgst:-false}, 00:36:47.155 "ddgst": ${ddgst:-false} 00:36:47.155 }, 00:36:47.155 "method": "bdev_nvme_attach_controller" 00:36:47.155 } 00:36:47.155 EOF 00:36:47.155 )") 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
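Each cat <<-EOF fragment above contributes one config entry; gen_nvmf_target_json substitutes the environment (TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_PORT=4420) and the result is validated through jq as a standard SPDK JSON config for the fio plugin. Reconstructed from those fragments, the document handed to fio on /dev/fd/62 should look roughly like the following; the outer subsystems/config wrapper is inferred from the plugin's config format rather than visible in this trace, so treat that shape as approximate:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }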
00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:47.155 "params": { 00:36:47.155 "name": "Nvme0", 00:36:47.155 "trtype": "tcp", 00:36:47.155 "traddr": "10.0.0.2", 00:36:47.155 "adrfam": "ipv4", 00:36:47.155 "trsvcid": "4420", 00:36:47.155 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:47.155 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:47.155 "hdgst": false, 00:36:47.155 "ddgst": false 00:36:47.155 }, 00:36:47.155 "method": "bdev_nvme_attach_controller" 00:36:47.155 }' 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:47.155 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:47.156 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:47.156 07:35:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:47.725 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:47.725 ... 
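The companion descriptor /dev/fd/61 carries the fio job file from gen_fio_conf; the banner above (randread, 128 KiB blocks, ioengine spdk_bdev, iodepth 3, three jobs) pins down its effective contents for this NULL_DIF=3 pass. A hand-written equivalent, where Nvme0n1 is the bdev exposed by bdev_nvme_attach_controller for controller name Nvme0 (exact option spelling in gen_fio_conf may differ slightly):

  [global]
  thread=1          ; SPDK fio plugins require fio's thread mode
  ioengine=spdk_bdev
  time_based=1
  runtime=5
  bs=128k
  iodepth=3
  numjobs=3

  [filename0]
  filename=Nvme0n1
  rw=randread

The device under test is addressed by bdev name, not by a /dev block device: the whole I/O path stays in user space, from fio through the bdev layer to the NVMe/TCP initiator.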
00:36:47.725 fio-3.35 00:36:47.725 Starting 3 threads 00:36:53.015 00:36:53.015 filename0: (groupid=0, jobs=1): err= 0: pid=3827108: Wed Nov 20 07:35:15 2024 00:36:53.015 read: IOPS=302, BW=37.8MiB/s (39.6MB/s)(189MiB/5004msec) 00:36:53.015 slat (nsec): min=5514, max=32505, avg=8139.88, stdev=1641.51 00:36:53.015 clat (usec): min=3739, max=90928, avg=9916.66, stdev=11533.43 00:36:53.015 lat (usec): min=3750, max=90937, avg=9924.80, stdev=11533.49 00:36:53.015 clat percentiles (usec): 00:36:53.015 | 1.00th=[ 4359], 5.00th=[ 5014], 10.00th=[ 5538], 20.00th=[ 6128], 00:36:53.015 | 30.00th=[ 6521], 40.00th=[ 6783], 50.00th=[ 7046], 60.00th=[ 7308], 00:36:53.015 | 70.00th=[ 7635], 80.00th=[ 8094], 90.00th=[ 8979], 95.00th=[46400], 00:36:53.015 | 99.00th=[49546], 99.50th=[87557], 99.90th=[89654], 99.95th=[90702], 00:36:53.015 | 99.99th=[90702] 00:36:53.015 bw ( KiB/s): min=27136, max=52224, per=33.41%, avg=38656.00, stdev=7487.00, samples=10 00:36:53.015 iops : min= 212, max= 408, avg=302.00, stdev=58.49, samples=10 00:36:53.015 lat (msec) : 4=0.13%, 10=92.92%, 20=0.20%, 50=6.08%, 100=0.66% 00:36:53.015 cpu : usr=95.88%, sys=3.86%, ctx=7, majf=0, minf=163 00:36:53.015 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:53.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:53.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:53.015 issued rwts: total=1512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:53.015 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:53.015 filename0: (groupid=0, jobs=1): err= 0: pid=3827109: Wed Nov 20 07:35:15 2024 00:36:53.015 read: IOPS=295, BW=36.9MiB/s (38.7MB/s)(186MiB/5045msec) 00:36:53.015 slat (nsec): min=5535, max=33982, avg=8708.54, stdev=1409.84 00:36:53.015 clat (usec): min=3995, max=88977, avg=10111.27, stdev=9356.57 00:36:53.015 lat (usec): min=4001, max=88986, avg=10119.98, stdev=9356.72 00:36:53.015 clat percentiles (usec): 00:36:53.015 | 1.00th=[ 4817], 5.00th=[ 5669], 10.00th=[ 6259], 20.00th=[ 6849], 00:36:53.015 | 30.00th=[ 7308], 40.00th=[ 7767], 50.00th=[ 8160], 60.00th=[ 8717], 00:36:53.015 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10421], 95.00th=[11863], 00:36:53.015 | 99.00th=[49021], 99.50th=[56361], 99.90th=[87557], 99.95th=[88605], 00:36:53.015 | 99.99th=[88605] 00:36:53.015 bw ( KiB/s): min=29440, max=46080, per=32.95%, avg=38118.40, stdev=6901.40, samples=10 00:36:53.015 iops : min= 230, max= 360, avg=297.80, stdev=53.92, samples=10 00:36:53.015 lat (msec) : 4=0.07%, 10=84.04%, 20=11.27%, 50=3.89%, 100=0.74% 00:36:53.015 cpu : usr=94.29%, sys=5.47%, ctx=8, majf=0, minf=149 00:36:53.015 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:53.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:53.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:53.015 issued rwts: total=1491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:53.015 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:53.015 filename0: (groupid=0, jobs=1): err= 0: pid=3827110: Wed Nov 20 07:35:15 2024 00:36:53.015 read: IOPS=311, BW=38.9MiB/s (40.8MB/s)(195MiB/5004msec) 00:36:53.015 slat (nsec): min=5565, max=34112, avg=8440.63, stdev=1950.05 00:36:53.015 clat (usec): min=3789, max=88058, avg=9629.59, stdev=7355.33 00:36:53.015 lat (usec): min=3796, max=88064, avg=9638.03, stdev=7355.36 00:36:53.015 clat percentiles (usec): 00:36:53.015 | 1.00th=[ 4621], 5.00th=[ 5473], 10.00th=[ 
6128], 20.00th=[ 6980], 00:36:53.015 | 30.00th=[ 7635], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 9110], 00:36:53.015 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[10945], 95.00th=[11731], 00:36:53.015 | 99.00th=[47973], 99.50th=[50070], 99.90th=[87557], 99.95th=[87557], 00:36:53.015 | 99.99th=[87557] 00:36:53.015 bw ( KiB/s): min=25088, max=54016, per=34.41%, avg=39808.00, stdev=8576.96, samples=10 00:36:53.015 iops : min= 196, max= 422, avg=311.00, stdev=67.01, samples=10 00:36:53.015 lat (msec) : 4=0.06%, 10=75.27%, 20=22.03%, 50=2.25%, 100=0.39% 00:36:53.015 cpu : usr=94.00%, sys=5.56%, ctx=176, majf=0, minf=70 00:36:53.015 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:53.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:53.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:53.015 issued rwts: total=1557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:53.015 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:53.015 00:36:53.015 Run status group 0 (all jobs): 00:36:53.015 READ: bw=113MiB/s (118MB/s), 36.9MiB/s-38.9MiB/s (38.7MB/s-40.8MB/s), io=570MiB (598MB), run=5004-5045msec 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.278 bdev_null0 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.278 [2024-11-20 07:35:15.418634] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.278 bdev_null1 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.278 07:35:15 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.278 bdev_null2 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:53.278 { 00:36:53.278 "params": { 00:36:53.278 "name": "Nvme$subsystem", 00:36:53.278 "trtype": "$TEST_TRANSPORT", 00:36:53.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:53.278 "adrfam": "ipv4", 00:36:53.278 "trsvcid": "$NVMF_PORT", 00:36:53.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:53.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:53.278 "hdgst": ${hdgst:-false}, 00:36:53.278 "ddgst": ${ddgst:-false} 00:36:53.278 }, 00:36:53.278 "method": "bdev_nvme_attach_controller" 00:36:53.278 } 00:36:53.278 EOF 00:36:53.278 )") 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:36:53.278 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:53.279 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:53.279 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:53.279 07:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:53.279 07:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:53.279 { 00:36:53.279 "params": { 00:36:53.279 "name": "Nvme$subsystem", 00:36:53.279 "trtype": "$TEST_TRANSPORT", 00:36:53.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:53.279 "adrfam": "ipv4", 00:36:53.279 "trsvcid": "$NVMF_PORT", 00:36:53.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:53.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:53.279 "hdgst": ${hdgst:-false}, 00:36:53.279 "ddgst": ${ddgst:-false} 00:36:53.279 }, 00:36:53.279 "method": "bdev_nvme_attach_controller" 00:36:53.279 } 00:36:53.279 EOF 00:36:53.279 )") 00:36:53.279 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:53.279 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
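The JSON handed to fio over /dev/fd/62 is assembled the way the heredocs above suggest: gen_nvmf_target_json appends one bdev_nvme_attach_controller fragment per subsystem, then comma-joins the array and passes it through jq. A stripped-down, runnable sketch of that pattern (the jq pretty-print stage is omitted and the ${hdgst:-false}/${ddgst:-false} defaults are replaced with literals here):

  config=()
  for sub in 0 1 2; do
    config+=("$(cat <<EOF
{ "params": { "name": "Nvme$sub", "trtype": "tcp", "traddr": "10.0.0.2",
    "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" }
EOF
)")
  done
  # comma-join, as the IFS=, / printf '%s\n' "${config[*]}" pair does below
  (IFS=,; printf '%s\n' "${config[*]}")

The joined string is the three-controller payload that the printf '%s\n' step in the transcript prints out in full.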
00:36:53.279 07:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:53.279 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:53.279 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:53.279 07:35:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:53.279 07:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:53.279 07:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:53.279 { 00:36:53.279 "params": { 00:36:53.279 "name": "Nvme$subsystem", 00:36:53.279 "trtype": "$TEST_TRANSPORT", 00:36:53.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:53.279 "adrfam": "ipv4", 00:36:53.279 "trsvcid": "$NVMF_PORT", 00:36:53.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:53.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:53.279 "hdgst": ${hdgst:-false}, 00:36:53.279 "ddgst": ${ddgst:-false} 00:36:53.279 }, 00:36:53.279 "method": "bdev_nvme_attach_controller" 00:36:53.279 } 00:36:53.279 EOF 00:36:53.279 )") 00:36:53.279 07:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:53.279 07:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:53.279 07:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:53.279 07:35:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:53.279 "params": { 00:36:53.279 "name": "Nvme0", 00:36:53.279 "trtype": "tcp", 00:36:53.279 "traddr": "10.0.0.2", 00:36:53.279 "adrfam": "ipv4", 00:36:53.279 "trsvcid": "4420", 00:36:53.279 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:53.279 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:53.279 "hdgst": false, 00:36:53.279 "ddgst": false 00:36:53.279 }, 00:36:53.279 "method": "bdev_nvme_attach_controller" 00:36:53.279 },{ 00:36:53.279 "params": { 00:36:53.279 "name": "Nvme1", 00:36:53.279 "trtype": "tcp", 00:36:53.279 "traddr": "10.0.0.2", 00:36:53.279 "adrfam": "ipv4", 00:36:53.279 "trsvcid": "4420", 00:36:53.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:53.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:53.279 "hdgst": false, 00:36:53.279 "ddgst": false 00:36:53.279 }, 00:36:53.279 "method": "bdev_nvme_attach_controller" 00:36:53.279 },{ 00:36:53.279 "params": { 00:36:53.279 "name": "Nvme2", 00:36:53.279 "trtype": "tcp", 00:36:53.279 "traddr": "10.0.0.2", 00:36:53.279 "adrfam": "ipv4", 00:36:53.279 "trsvcid": "4420", 00:36:53.279 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:53.279 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:53.279 "hdgst": false, 00:36:53.279 "ddgst": false 00:36:53.279 }, 00:36:53.279 "method": "bdev_nvme_attach_controller" 00:36:53.279 }' 00:36:53.540 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:53.540 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:53.540 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:53.540 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:53.540 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:53.540 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:53.540 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # 
asan_lib= 00:36:53.540 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:53.540 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:53.540 07:35:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:53.809 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:53.809 ... 00:36:53.809 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:53.809 ... 00:36:53.809 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:53.809 ... 00:36:53.809 fio-3.35 00:36:53.809 Starting 24 threads 00:37:06.046 00:37:06.046 filename0: (groupid=0, jobs=1): err= 0: pid=3828550: Wed Nov 20 07:35:26 2024 00:37:06.046 read: IOPS=681, BW=2728KiB/s (2793kB/s)(26.7MiB/10018msec) 00:37:06.046 slat (usec): min=5, max=119, avg=13.28, stdev=12.19 00:37:06.046 clat (msec): min=2, max=359, avg=23.36, stdev=21.96 00:37:06.046 lat (msec): min=2, max=360, avg=23.37, stdev=21.96 00:37:06.046 clat percentiles (msec): 00:37:06.046 | 1.00th=[ 3], 5.00th=[ 14], 10.00th=[ 17], 20.00th=[ 21], 00:37:06.046 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:06.046 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 27], 00:37:06.046 | 99.00th=[ 34], 99.50th=[ 39], 99.90th=[ 355], 99.95th=[ 359], 00:37:06.046 | 99.99th=[ 359] 00:37:06.046 bw ( KiB/s): min= 128, max= 3760, per=4.36%, avg=2712.42, stdev=698.82, samples=19 00:37:06.046 iops : min= 32, max= 940, avg=678.11, stdev=174.70, samples=19 00:37:06.046 lat (msec) : 4=1.30%, 10=1.42%, 20=15.57%, 50=81.24%, 500=0.47% 00:37:06.046 cpu : usr=98.80%, sys=0.86%, ctx=15, majf=0, minf=60 00:37:06.046 IO depths : 1=3.1%, 2=6.4%, 4=15.9%, 8=64.9%, 16=9.6%, 32=0.0%, >=64=0.0% 00:37:06.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.046 complete : 0=0.0%, 4=91.6%, 8=3.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.046 issued rwts: total=6832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.046 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.046 filename0: (groupid=0, jobs=1): err= 0: pid=3828551: Wed Nov 20 07:35:26 2024 00:37:06.046 read: IOPS=638, BW=2555KiB/s (2617kB/s)(25.1MiB/10049msec) 00:37:06.046 slat (usec): min=4, max=141, avg=16.44, stdev=15.09 00:37:06.046 clat (msec): min=7, max=387, avg=24.92, stdev=20.54 00:37:06.046 lat (msec): min=7, max=387, avg=24.94, stdev=20.54 00:37:06.046 clat percentiles (msec): 00:37:06.046 | 1.00th=[ 13], 5.00th=[ 18], 10.00th=[ 21], 20.00th=[ 23], 00:37:06.046 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 24], 00:37:06.046 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 28], 95.00th=[ 31], 00:37:06.046 | 99.00th=[ 39], 99.50th=[ 140], 99.90th=[ 326], 99.95th=[ 388], 00:37:06.046 | 99.99th=[ 388] 00:37:06.046 bw ( KiB/s): min= 208, max= 2832, per=4.11%, avg=2561.75, stdev=574.72, samples=20 00:37:06.046 iops : min= 52, max= 708, avg=640.40, stdev=143.67, samples=20 00:37:06.046 lat (msec) : 10=0.28%, 20=8.57%, 50=90.40%, 100=0.09%, 250=0.31% 00:37:06.046 lat (msec) : 500=0.34% 00:37:06.046 cpu : usr=98.90%, sys=0.78%, ctx=15, majf=0, minf=59 00:37:06.046 IO depths : 1=1.7%, 2=3.4%, 4=9.1%, 8=72.4%, 16=13.3%, 
32=0.0%, >=64=0.0% 00:37:06.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.046 complete : 0=0.0%, 4=89.5%, 8=7.3%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.046 issued rwts: total=6420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.046 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.046 filename0: (groupid=0, jobs=1): err= 0: pid=3828552: Wed Nov 20 07:35:26 2024 00:37:06.046 read: IOPS=650, BW=2601KiB/s (2663kB/s)(25.4MiB/10015msec) 00:37:06.046 slat (usec): min=5, max=167, avg=29.17, stdev=20.42 00:37:06.046 clat (msec): min=7, max=666, avg=24.36, stdev=25.89 00:37:06.046 lat (msec): min=7, max=666, avg=24.39, stdev=25.89 00:37:06.046 clat percentiles (msec): 00:37:06.046 | 1.00th=[ 14], 5.00th=[ 20], 10.00th=[ 23], 20.00th=[ 23], 00:37:06.046 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:06.046 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:37:06.046 | 99.00th=[ 33], 99.50th=[ 39], 99.90th=[ 502], 99.95th=[ 502], 00:37:06.046 | 99.99th=[ 667] 00:37:06.046 bw ( KiB/s): min= 112, max= 3168, per=4.17%, avg=2598.40, stdev=622.55, samples=20 00:37:06.046 iops : min= 28, max= 792, avg=649.60, stdev=155.64, samples=20 00:37:06.046 lat (msec) : 10=0.45%, 20=4.87%, 50=94.23%, 250=0.21%, 750=0.25% 00:37:06.046 cpu : usr=98.86%, sys=0.80%, ctx=17, majf=0, minf=43 00:37:06.046 IO depths : 1=5.6%, 2=11.2%, 4=23.0%, 8=53.2%, 16=7.1%, 32=0.0%, >=64=0.0% 00:37:06.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.046 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.046 issued rwts: total=6512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.046 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.046 filename0: (groupid=0, jobs=1): err= 0: pid=3828553: Wed Nov 20 07:35:26 2024 00:37:06.046 read: IOPS=642, BW=2570KiB/s (2632kB/s)(25.1MiB/10009msec) 00:37:06.046 slat (usec): min=5, max=115, avg=29.33, stdev=19.59 00:37:06.046 clat (msec): min=12, max=668, avg=24.63, stdev=26.00 00:37:06.046 lat (msec): min=12, max=668, avg=24.66, stdev=25.99 00:37:06.046 clat percentiles (msec): 00:37:06.046 | 1.00th=[ 19], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 23], 00:37:06.046 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:06.046 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:37:06.046 | 99.00th=[ 28], 99.50th=[ 45], 99.90th=[ 502], 99.95th=[ 502], 00:37:06.046 | 99.99th=[ 667] 00:37:06.046 bw ( KiB/s): min= 112, max= 2816, per=4.10%, avg=2553.26, stdev=605.83, samples=19 00:37:06.046 iops : min= 28, max= 704, avg=638.32, stdev=151.46, samples=19 00:37:06.046 lat (msec) : 20=1.31%, 50=98.23%, 250=0.22%, 750=0.25% 00:37:06.046 cpu : usr=98.63%, sys=0.84%, ctx=133, majf=0, minf=46 00:37:06.046 IO depths : 1=5.8%, 2=11.6%, 4=24.0%, 8=51.8%, 16=6.9%, 32=0.0%, >=64=0.0% 00:37:06.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.046 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.046 issued rwts: total=6432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.046 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.046 filename0: (groupid=0, jobs=1): err= 0: pid=3828554: Wed Nov 20 07:35:26 2024 00:37:06.046 read: IOPS=653, BW=2615KiB/s (2677kB/s)(25.6MiB/10014msec) 00:37:06.046 slat (usec): min=5, max=161, avg=30.35, stdev=24.93 00:37:06.046 clat (msec): min=6, max=501, avg=24.20, stdev=24.87 00:37:06.046 lat (msec): min=6, max=501, 
avg=24.23, stdev=24.87 00:37:06.046 clat percentiles (msec): 00:37:06.046 | 1.00th=[ 14], 5.00th=[ 17], 10.00th=[ 22], 20.00th=[ 23], 00:37:06.046 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:06.046 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:37:06.046 | 99.00th=[ 33], 99.50th=[ 38], 99.90th=[ 502], 99.95th=[ 502], 00:37:06.046 | 99.99th=[ 502] 00:37:06.046 bw ( KiB/s): min= 128, max= 3488, per=4.20%, avg=2612.00, stdev=637.49, samples=20 00:37:06.046 iops : min= 32, max= 872, avg=653.00, stdev=159.37, samples=20 00:37:06.046 lat (msec) : 10=0.35%, 20=6.92%, 50=92.24%, 250=0.24%, 750=0.24% 00:37:06.046 cpu : usr=98.95%, sys=0.72%, ctx=16, majf=0, minf=49 00:37:06.046 IO depths : 1=5.6%, 2=11.3%, 4=23.1%, 8=53.1%, 16=7.0%, 32=0.0%, >=64=0.0% 00:37:06.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.046 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.046 issued rwts: total=6546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.047 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.047 filename0: (groupid=0, jobs=1): err= 0: pid=3828555: Wed Nov 20 07:35:26 2024 00:37:06.047 read: IOPS=653, BW=2614KiB/s (2677kB/s)(25.7MiB/10049msec) 00:37:06.047 slat (usec): min=4, max=155, avg=22.23, stdev=20.10 00:37:06.047 clat (msec): min=10, max=668, avg=24.28, stdev=26.00 00:37:06.047 lat (msec): min=10, max=668, avg=24.30, stdev=26.00 00:37:06.047 clat percentiles (msec): 00:37:06.047 | 1.00th=[ 14], 5.00th=[ 17], 10.00th=[ 19], 20.00th=[ 23], 00:37:06.047 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:06.047 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 28], 00:37:06.047 | 99.00th=[ 36], 99.50th=[ 55], 99.90th=[ 502], 99.95th=[ 502], 00:37:06.047 | 99.99th=[ 667] 00:37:06.047 bw ( KiB/s): min= 112, max= 3024, per=4.21%, avg=2620.80, stdev=613.90, samples=20 00:37:06.047 iops : min= 28, max= 756, avg=655.15, stdev=153.45, samples=20 00:37:06.047 lat (msec) : 20=13.38%, 50=86.07%, 100=0.09%, 250=0.21%, 750=0.24% 00:37:06.047 cpu : usr=98.98%, sys=0.69%, ctx=26, majf=0, minf=45 00:37:06.047 IO depths : 1=2.9%, 2=5.9%, 4=13.5%, 8=66.5%, 16=11.2%, 32=0.0%, >=64=0.0% 00:37:06.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.047 complete : 0=0.0%, 4=91.2%, 8=4.7%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.047 issued rwts: total=6568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.047 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.047 filename0: (groupid=0, jobs=1): err= 0: pid=3828556: Wed Nov 20 07:35:26 2024 00:37:06.047 read: IOPS=683, BW=2736KiB/s (2802kB/s)(26.7MiB/10009msec) 00:37:06.047 slat (usec): min=5, max=134, avg=18.28, stdev=18.20 00:37:06.047 clat (usec): min=1436, max=384019, avg=23250.76, stdev=20376.40 00:37:06.047 lat (usec): min=1457, max=384034, avg=23269.04, stdev=20376.18 00:37:06.047 clat percentiles (msec): 00:37:06.047 | 1.00th=[ 3], 5.00th=[ 14], 10.00th=[ 17], 20.00th=[ 22], 00:37:06.047 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:06.047 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:37:06.047 | 99.00th=[ 37], 99.50th=[ 138], 99.90th=[ 334], 99.95th=[ 384], 00:37:06.047 | 99.99th=[ 384] 00:37:06.047 bw ( KiB/s): min= 208, max= 4320, per=4.39%, avg=2734.32, stdev=733.05, samples=19 00:37:06.047 iops : min= 52, max= 1080, avg=683.58, stdev=183.26, samples=19 00:37:06.047 lat (msec) : 2=0.16%, 4=1.94%, 10=1.34%, 20=13.61%, 
50=82.33% 00:37:06.047 lat (msec) : 250=0.29%, 500=0.32% 00:37:06.047 cpu : usr=99.02%, sys=0.67%, ctx=41, majf=0, minf=95 00:37:06.047 IO depths : 1=4.4%, 2=8.9%, 4=19.5%, 8=58.8%, 16=8.3%, 32=0.0%, >=64=0.0% 00:37:06.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.047 complete : 0=0.0%, 4=92.6%, 8=1.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.047 issued rwts: total=6846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.047 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.047 filename0: (groupid=0, jobs=1): err= 0: pid=3828557: Wed Nov 20 07:35:26 2024 00:37:06.047 read: IOPS=644, BW=2578KiB/s (2640kB/s)(25.2MiB/10004msec) 00:37:06.047 slat (usec): min=5, max=132, avg=19.46, stdev=17.00 00:37:06.047 clat (msec): min=15, max=377, avg=24.66, stdev=19.96 00:37:06.047 lat (msec): min=15, max=377, avg=24.68, stdev=19.96 00:37:06.047 clat percentiles (msec): 00:37:06.047 | 1.00th=[ 19], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 23], 00:37:06.047 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 24], 00:37:06.047 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 00:37:06.047 | 99.00th=[ 27], 99.50th=[ 138], 99.90th=[ 380], 99.95th=[ 380], 00:37:06.047 | 99.99th=[ 380] 00:37:06.047 bw ( KiB/s): min= 256, max= 2816, per=4.12%, avg=2566.74, stdev=580.69, samples=19 00:37:06.047 iops : min= 64, max= 704, avg=641.68, stdev=145.17, samples=19 00:37:06.047 lat (msec) : 20=1.52%, 50=97.74%, 250=0.50%, 500=0.25% 00:37:06.047 cpu : usr=99.08%, sys=0.60%, ctx=16, majf=0, minf=38 00:37:06.047 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:06.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.047 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.047 issued rwts: total=6448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.047 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.047 filename1: (groupid=0, jobs=1): err= 0: pid=3828559: Wed Nov 20 07:35:26 2024 00:37:06.047 read: IOPS=658, BW=2633KiB/s (2696kB/s)(25.7MiB/10004msec) 00:37:06.047 slat (usec): min=4, max=113, avg=25.05, stdev=20.40 00:37:06.047 clat (msec): min=3, max=502, avg=24.09, stdev=23.77 00:37:06.047 lat (msec): min=3, max=502, avg=24.12, stdev=23.77 00:37:06.047 clat percentiles (msec): 00:37:06.047 | 1.00th=[ 13], 5.00th=[ 16], 10.00th=[ 20], 20.00th=[ 23], 00:37:06.047 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:06.047 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 27], 00:37:06.047 | 99.00th=[ 37], 99.50th=[ 140], 99.90th=[ 502], 99.95th=[ 502], 00:37:06.047 | 99.99th=[ 502] 00:37:06.047 bw ( KiB/s): min= 160, max= 3072, per=4.21%, avg=2618.11, stdev=620.97, samples=19 00:37:06.047 iops : min= 40, max= 768, avg=654.53, stdev=155.24, samples=19 00:37:06.047 lat (msec) : 4=0.21%, 10=0.24%, 20=9.90%, 50=89.06%, 100=0.03% 00:37:06.047 lat (msec) : 250=0.30%, 500=0.06%, 750=0.18% 00:37:06.047 cpu : usr=98.93%, sys=0.75%, ctx=14, majf=0, minf=48 00:37:06.047 IO depths : 1=3.0%, 2=7.0%, 4=17.6%, 8=62.0%, 16=10.5%, 32=0.0%, >=64=0.0% 00:37:06.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.047 complete : 0=0.0%, 4=92.4%, 8=2.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.047 issued rwts: total=6584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.047 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.047 filename1: (groupid=0, jobs=1): err= 0: pid=3828560: Wed Nov 20 07:35:26 
2024 00:37:06.047 read: IOPS=648, BW=2592KiB/s (2654kB/s)(25.3MiB/10003msec) 00:37:06.047 slat (usec): min=5, max=153, avg=22.72, stdev=21.34 00:37:06.047 clat (msec): min=4, max=424, avg=24.51, stdev=22.45 00:37:06.047 lat (msec): min=4, max=424, avg=24.54, stdev=22.45 00:37:06.047 clat percentiles (msec): 00:37:06.047 | 1.00th=[ 14], 5.00th=[ 17], 10.00th=[ 19], 20.00th=[ 23], 00:37:06.047 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:06.047 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 27], 95.00th=[ 30], 00:37:06.047 | 99.00th=[ 40], 99.50th=[ 78], 99.90th=[ 422], 99.95th=[ 426], 00:37:06.047 | 99.99th=[ 426] 00:37:06.047 bw ( KiB/s): min= 128, max= 2976, per=4.16%, avg=2588.05, stdev=619.89, samples=19 00:37:06.047 iops : min= 32, max= 744, avg=647.00, stdev=154.97, samples=19 00:37:06.047 lat (msec) : 10=0.34%, 20=14.44%, 50=84.63%, 100=0.09%, 250=0.15% 00:37:06.047 lat (msec) : 500=0.34% 00:37:06.047 cpu : usr=98.93%, sys=0.72%, ctx=49, majf=0, minf=44 00:37:06.047 IO depths : 1=2.2%, 2=4.5%, 4=11.5%, 8=69.6%, 16=12.1%, 32=0.0%, >=64=0.0% 00:37:06.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.047 complete : 0=0.0%, 4=90.8%, 8=5.4%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.047 issued rwts: total=6482,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.047 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.047 filename1: (groupid=0, jobs=1): err= 0: pid=3828561: Wed Nov 20 07:35:26 2024 00:37:06.047 read: IOPS=635, BW=2542KiB/s (2603kB/s)(24.9MiB/10042msec) 00:37:06.047 slat (usec): min=5, max=137, avg=17.90, stdev=18.42 00:37:06.047 clat (msec): min=7, max=667, avg=25.01, stdev=26.45 00:37:06.047 lat (msec): min=7, max=667, avg=25.02, stdev=26.45 00:37:06.047 clat percentiles (msec): 00:37:06.047 | 1.00th=[ 14], 5.00th=[ 17], 10.00th=[ 19], 20.00th=[ 22], 00:37:06.047 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 24], 00:37:06.047 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 29], 95.00th=[ 32], 00:37:06.047 | 99.00th=[ 39], 99.50th=[ 59], 99.90th=[ 502], 99.95th=[ 502], 00:37:06.047 | 99.99th=[ 667] 00:37:06.047 bw ( KiB/s): min= 112, max= 2976, per=4.10%, avg=2551.45, stdev=599.42, samples=20 00:37:06.047 iops : min= 28, max= 744, avg=637.85, stdev=149.86, samples=20 00:37:06.047 lat (msec) : 10=0.08%, 20=13.32%, 50=85.99%, 100=0.14%, 250=0.22% 00:37:06.047 lat (msec) : 750=0.25% 00:37:06.047 cpu : usr=98.78%, sys=0.91%, ctx=14, majf=0, minf=31 00:37:06.047 IO depths : 1=0.6%, 2=1.2%, 4=5.9%, 8=77.7%, 16=14.6%, 32=0.0%, >=64=0.0% 00:37:06.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.047 complete : 0=0.0%, 4=89.4%, 8=7.5%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.047 issued rwts: total=6382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.047 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.047 filename1: (groupid=0, jobs=1): err= 0: pid=3828562: Wed Nov 20 07:35:26 2024 00:37:06.047 read: IOPS=634, BW=2539KiB/s (2600kB/s)(24.8MiB/10002msec) 00:37:06.047 slat (usec): min=5, max=144, avg=18.58, stdev=18.02 00:37:06.047 clat (msec): min=7, max=506, avg=25.11, stdev=23.35 00:37:06.047 lat (msec): min=7, max=506, avg=25.13, stdev=23.35 00:37:06.047 clat percentiles (msec): 00:37:06.047 | 1.00th=[ 14], 5.00th=[ 18], 10.00th=[ 20], 20.00th=[ 23], 00:37:06.047 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 24], 00:37:06.047 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 28], 95.00th=[ 31], 00:37:06.047 | 99.00th=[ 42], 99.50th=[ 171], 99.90th=[ 
435], 99.95th=[ 506], 00:37:06.047 | 99.99th=[ 506] 00:37:06.047 bw ( KiB/s): min= 128, max= 2896, per=4.05%, avg=2522.37, stdev=601.10, samples=19 00:37:06.047 iops : min= 32, max= 724, avg=630.58, stdev=150.28, samples=19 00:37:06.047 lat (msec) : 10=0.08%, 20=10.33%, 50=88.74%, 100=0.35%, 250=0.16% 00:37:06.047 lat (msec) : 500=0.28%, 750=0.06% 00:37:06.047 cpu : usr=99.14%, sys=0.53%, ctx=19, majf=0, minf=46 00:37:06.048 IO depths : 1=0.5%, 2=1.1%, 4=5.5%, 8=77.6%, 16=15.3%, 32=0.0%, >=64=0.0% 00:37:06.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.048 complete : 0=0.0%, 4=89.7%, 8=7.8%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.048 issued rwts: total=6350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.048 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.048 filename1: (groupid=0, jobs=1): err= 0: pid=3828563: Wed Nov 20 07:35:26 2024 00:37:06.048 read: IOPS=639, BW=2557KiB/s (2618kB/s)(25.1MiB/10047msec) 00:37:06.048 slat (usec): min=4, max=141, avg=31.37, stdev=21.59 00:37:06.048 clat (msec): min=11, max=361, avg=24.66, stdev=22.36 00:37:06.048 lat (msec): min=11, max=361, avg=24.69, stdev=22.36 00:37:06.048 clat percentiles (msec): 00:37:06.048 | 1.00th=[ 16], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 23], 00:37:06.048 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:06.048 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 26], 00:37:06.048 | 99.00th=[ 35], 99.50th=[ 54], 99.90th=[ 363], 99.95th=[ 363], 00:37:06.048 | 99.99th=[ 363] 00:37:06.048 bw ( KiB/s): min= 128, max= 3008, per=4.12%, avg=2564.75, stdev=591.98, samples=20 00:37:06.048 iops : min= 32, max= 752, avg=641.15, stdev=147.99, samples=20 00:37:06.048 lat (msec) : 20=4.02%, 50=95.39%, 100=0.09%, 500=0.50% 00:37:06.048 cpu : usr=98.70%, sys=0.80%, ctx=125, majf=0, minf=43 00:37:06.048 IO depths : 1=5.5%, 2=11.0%, 4=22.8%, 8=53.6%, 16=7.1%, 32=0.0%, >=64=0.0% 00:37:06.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.048 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.048 issued rwts: total=6422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.048 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.048 filename1: (groupid=0, jobs=1): err= 0: pid=3828564: Wed Nov 20 07:35:26 2024 00:37:06.048 read: IOPS=659, BW=2637KiB/s (2700kB/s)(25.8MiB/10014msec) 00:37:06.048 slat (usec): min=5, max=189, avg=22.18, stdev=17.67 00:37:06.048 clat (msec): min=7, max=463, avg=24.08, stdev=22.41 00:37:06.048 lat (msec): min=7, max=463, avg=24.10, stdev=22.41 00:37:06.048 clat percentiles (msec): 00:37:06.048 | 1.00th=[ 14], 5.00th=[ 16], 10.00th=[ 19], 20.00th=[ 23], 00:37:06.048 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:06.048 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 26], 00:37:06.048 | 99.00th=[ 34], 99.50th=[ 40], 99.90th=[ 359], 99.95th=[ 359], 00:37:06.048 | 99.99th=[ 464] 00:37:06.048 bw ( KiB/s): min= 128, max= 3280, per=4.24%, avg=2638.40, stdev=640.88, samples=20 00:37:06.048 iops : min= 32, max= 820, avg=659.60, stdev=160.22, samples=20 00:37:06.048 lat (msec) : 10=0.41%, 20=11.78%, 50=87.32%, 250=0.03%, 500=0.45% 00:37:06.048 cpu : usr=98.77%, sys=0.89%, ctx=17, majf=0, minf=49 00:37:06.048 IO depths : 1=4.5%, 2=9.0%, 4=19.5%, 8=58.8%, 16=8.2%, 32=0.0%, >=64=0.0% 00:37:06.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.048 complete : 0=0.0%, 4=92.5%, 8=2.0%, 16=5.5%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:37:06.048 issued rwts: total=6602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.048 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.048 filename1: (groupid=0, jobs=1): err= 0: pid=3828565: Wed Nov 20 07:35:26 2024 00:37:06.048 read: IOPS=650, BW=2604KiB/s (2666kB/s)(25.4MiB/10008msec) 00:37:06.048 slat (usec): min=5, max=151, avg=25.31, stdev=18.33 00:37:06.048 clat (msec): min=11, max=666, avg=24.35, stdev=26.01 00:37:06.048 lat (msec): min=11, max=666, avg=24.38, stdev=26.01 00:37:06.048 clat percentiles (msec): 00:37:06.048 | 1.00th=[ 14], 5.00th=[ 20], 10.00th=[ 23], 20.00th=[ 23], 00:37:06.048 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:06.048 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:37:06.048 | 99.00th=[ 31], 99.50th=[ 36], 99.90th=[ 502], 99.95th=[ 502], 00:37:06.048 | 99.99th=[ 667] 00:37:06.048 bw ( KiB/s): min= 112, max= 2976, per=4.16%, avg=2587.79, stdev=616.57, samples=19 00:37:06.048 iops : min= 28, max= 744, avg=646.95, stdev=154.14, samples=19 00:37:06.048 lat (msec) : 20=5.56%, 50=93.98%, 250=0.21%, 750=0.25% 00:37:06.048 cpu : usr=98.91%, sys=0.76%, ctx=14, majf=0, minf=45 00:37:06.048 IO depths : 1=5.4%, 2=10.9%, 4=22.8%, 8=53.7%, 16=7.2%, 32=0.0%, >=64=0.0% 00:37:06.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.048 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.048 issued rwts: total=6514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.048 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.048 filename1: (groupid=0, jobs=1): err= 0: pid=3828566: Wed Nov 20 07:35:26 2024 00:37:06.048 read: IOPS=656, BW=2624KiB/s (2687kB/s)(25.6MiB/10009msec) 00:37:06.048 slat (usec): min=5, max=151, avg=27.16, stdev=20.53 00:37:06.048 clat (msec): min=7, max=376, avg=24.16, stdev=19.95 00:37:06.048 lat (msec): min=7, max=376, avg=24.18, stdev=19.95 00:37:06.048 clat percentiles (msec): 00:37:06.048 | 1.00th=[ 12], 5.00th=[ 19], 10.00th=[ 22], 20.00th=[ 23], 00:37:06.048 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:06.048 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:37:06.048 | 99.00th=[ 35], 99.50th=[ 138], 99.90th=[ 376], 99.95th=[ 376], 00:37:06.048 | 99.99th=[ 376] 00:37:06.048 bw ( KiB/s): min= 256, max= 3248, per=4.20%, avg=2616.42, stdev=612.53, samples=19 00:37:06.048 iops : min= 64, max= 812, avg=654.11, stdev=153.13, samples=19 00:37:06.048 lat (msec) : 10=0.64%, 20=6.00%, 50=92.63%, 250=0.49%, 500=0.24% 00:37:06.048 cpu : usr=98.95%, sys=0.71%, ctx=17, majf=0, minf=44 00:37:06.048 IO depths : 1=5.6%, 2=11.3%, 4=23.3%, 8=52.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:37:06.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.048 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.048 issued rwts: total=6566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.048 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.048 filename2: (groupid=0, jobs=1): err= 0: pid=3828567: Wed Nov 20 07:35:26 2024 00:37:06.048 read: IOPS=645, BW=2581KiB/s (2643kB/s)(25.2MiB/10016msec) 00:37:06.048 slat (usec): min=5, max=144, avg=18.77, stdev=16.35 00:37:06.048 clat (msec): min=11, max=360, avg=24.64, stdev=22.24 00:37:06.048 lat (msec): min=11, max=360, avg=24.65, stdev=22.24 00:37:06.048 clat percentiles (msec): 00:37:06.048 | 1.00th=[ 16], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 23], 00:37:06.048 | 30.00th=[ 23], 40.00th=[ 23], 
50.00th=[ 24], 60.00th=[ 24], 00:37:06.048 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:37:06.048 | 99.00th=[ 33], 99.50th=[ 36], 99.90th=[ 359], 99.95th=[ 359], 00:37:06.048 | 99.99th=[ 363] 00:37:06.048 bw ( KiB/s): min= 128, max= 2960, per=4.14%, avg=2580.80, stdev=596.01, samples=20 00:37:06.048 iops : min= 32, max= 740, avg=645.20, stdev=149.00, samples=20 00:37:06.048 lat (msec) : 20=3.28%, 50=96.22%, 500=0.50% 00:37:06.048 cpu : usr=98.78%, sys=0.90%, ctx=14, majf=0, minf=32 00:37:06.048 IO depths : 1=5.6%, 2=11.4%, 4=23.6%, 8=52.5%, 16=6.9%, 32=0.0%, >=64=0.0% 00:37:06.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.048 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.048 issued rwts: total=6462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.048 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.048 filename2: (groupid=0, jobs=1): err= 0: pid=3828568: Wed Nov 20 07:35:26 2024 00:37:06.048 read: IOPS=647, BW=2589KiB/s (2651kB/s)(25.3MiB/10006msec) 00:37:06.048 slat (usec): min=4, max=119, avg=25.50, stdev=19.63 00:37:06.048 clat (msec): min=11, max=362, avg=24.48, stdev=22.42 00:37:06.048 lat (msec): min=11, max=362, avg=24.50, stdev=22.42 00:37:06.048 clat percentiles (msec): 00:37:06.049 | 1.00th=[ 14], 5.00th=[ 21], 10.00th=[ 23], 20.00th=[ 23], 00:37:06.049 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:06.049 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:37:06.049 | 99.00th=[ 34], 99.50th=[ 36], 99.90th=[ 363], 99.95th=[ 363], 00:37:06.049 | 99.99th=[ 363] 00:37:06.049 bw ( KiB/s): min= 128, max= 2928, per=4.13%, avg=2571.79, stdev=609.75, samples=19 00:37:06.049 iops : min= 32, max= 732, avg=642.95, stdev=152.44, samples=19 00:37:06.049 lat (msec) : 20=4.80%, 50=94.70%, 500=0.49% 00:37:06.049 cpu : usr=98.96%, sys=0.72%, ctx=22, majf=0, minf=44 00:37:06.049 IO depths : 1=4.9%, 2=10.3%, 4=22.7%, 8=54.3%, 16=7.7%, 32=0.0%, >=64=0.0% 00:37:06.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.049 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.049 issued rwts: total=6476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.049 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.049 filename2: (groupid=0, jobs=1): err= 0: pid=3828569: Wed Nov 20 07:35:26 2024 00:37:06.049 read: IOPS=645, BW=2582KiB/s (2644kB/s)(25.3MiB/10014msec) 00:37:06.049 slat (usec): min=5, max=170, avg=28.55, stdev=21.47 00:37:06.049 clat (msec): min=5, max=360, avg=24.53, stdev=22.25 00:37:06.049 lat (msec): min=5, max=360, avg=24.56, stdev=22.25 00:37:06.049 clat percentiles (msec): 00:37:06.049 | 1.00th=[ 12], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 23], 00:37:06.049 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 24], 00:37:06.049 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:37:06.049 | 99.00th=[ 29], 99.50th=[ 39], 99.90th=[ 359], 99.95th=[ 359], 00:37:06.049 | 99.99th=[ 359] 00:37:06.049 bw ( KiB/s): min= 128, max= 3080, per=4.14%, avg=2579.60, stdev=606.83, samples=20 00:37:06.049 iops : min= 32, max= 770, avg=644.90, stdev=151.71, samples=20 00:37:06.049 lat (msec) : 10=0.45%, 20=2.24%, 50=96.81%, 500=0.49% 00:37:06.049 cpu : usr=98.99%, sys=0.69%, ctx=24, majf=0, minf=45 00:37:06.049 IO depths : 1=5.7%, 2=11.5%, 4=24.0%, 8=51.9%, 16=7.0%, 32=0.0%, >=64=0.0% 00:37:06.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.049 
complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.049 issued rwts: total=6465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.049 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.049 filename2: (groupid=0, jobs=1): err= 0: pid=3828570: Wed Nov 20 07:35:26 2024 00:37:06.049 read: IOPS=649, BW=2598KiB/s (2660kB/s)(25.4MiB/10014msec) 00:37:06.049 slat (usec): min=5, max=135, avg=14.40, stdev=15.56 00:37:06.049 clat (msec): min=8, max=667, avg=24.51, stdev=25.89 00:37:06.049 lat (msec): min=8, max=667, avg=24.53, stdev=25.89 00:37:06.049 clat percentiles (msec): 00:37:06.049 | 1.00th=[ 14], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 23], 00:37:06.049 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 24], 00:37:06.049 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:37:06.049 | 99.00th=[ 31], 99.50th=[ 35], 99.90th=[ 502], 99.95th=[ 502], 00:37:06.049 | 99.99th=[ 667] 00:37:06.049 bw ( KiB/s): min= 112, max= 3216, per=4.17%, avg=2595.20, stdev=618.89, samples=20 00:37:06.049 iops : min= 28, max= 804, avg=648.80, stdev=154.72, samples=20 00:37:06.049 lat (msec) : 10=0.49%, 20=3.51%, 50=95.54%, 250=0.22%, 750=0.25% 00:37:06.049 cpu : usr=98.94%, sys=0.73%, ctx=14, majf=0, minf=49 00:37:06.049 IO depths : 1=5.9%, 2=11.8%, 4=24.0%, 8=51.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:06.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.049 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.049 issued rwts: total=6504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.049 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.049 filename2: (groupid=0, jobs=1): err= 0: pid=3828572: Wed Nov 20 07:35:26 2024 00:37:06.049 read: IOPS=654, BW=2619KiB/s (2682kB/s)(25.6MiB/10017msec) 00:37:06.049 slat (usec): min=5, max=146, avg=26.95, stdev=19.78 00:37:06.049 clat (msec): min=8, max=420, avg=24.20, stdev=22.02 00:37:06.049 lat (msec): min=8, max=420, avg=24.22, stdev=22.02 00:37:06.049 clat percentiles (msec): 00:37:06.049 | 1.00th=[ 13], 5.00th=[ 18], 10.00th=[ 22], 20.00th=[ 23], 00:37:06.049 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:06.049 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:37:06.049 | 99.00th=[ 35], 99.50th=[ 74], 99.90th=[ 418], 99.95th=[ 422], 00:37:06.049 | 99.99th=[ 422] 00:37:06.049 bw ( KiB/s): min= 176, max= 2992, per=4.21%, avg=2619.20, stdev=599.43, samples=20 00:37:06.049 iops : min= 44, max= 748, avg=654.80, stdev=149.86, samples=20 00:37:06.049 lat (msec) : 10=0.40%, 20=7.38%, 50=91.64%, 100=0.09%, 250=0.09% 00:37:06.049 lat (msec) : 500=0.40% 00:37:06.049 cpu : usr=98.74%, sys=0.92%, ctx=18, majf=0, minf=64 00:37:06.049 IO depths : 1=5.2%, 2=10.5%, 4=22.1%, 8=54.8%, 16=7.3%, 32=0.0%, >=64=0.0% 00:37:06.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.049 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.049 issued rwts: total=6558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.049 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.049 filename2: (groupid=0, jobs=1): err= 0: pid=3828573: Wed Nov 20 07:35:26 2024 00:37:06.049 read: IOPS=643, BW=2574KiB/s (2636kB/s)(25.1MiB/10003msec) 00:37:06.049 slat (usec): min=5, max=119, avg=22.58, stdev=18.63 00:37:06.049 clat (msec): min=8, max=503, avg=24.71, stdev=24.86 00:37:06.049 lat (msec): min=8, max=503, avg=24.73, stdev=24.86 00:37:06.049 clat percentiles (msec): 00:37:06.049 | 
1.00th=[ 14], 5.00th=[ 18], 10.00th=[ 21], 20.00th=[ 23], 00:37:06.049 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 24], 00:37:06.049 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 27], 95.00th=[ 30], 00:37:06.049 | 99.00th=[ 38], 99.50th=[ 40], 99.90th=[ 502], 99.95th=[ 502], 00:37:06.049 | 99.99th=[ 502] 00:37:06.049 bw ( KiB/s): min= 128, max= 2832, per=4.12%, avg=2562.53, stdev=608.66, samples=19 00:37:06.049 iops : min= 32, max= 708, avg=640.63, stdev=152.17, samples=19 00:37:06.049 lat (msec) : 10=0.06%, 20=9.97%, 50=89.47%, 250=0.22%, 500=0.06% 00:37:06.049 lat (msec) : 750=0.22% 00:37:06.049 cpu : usr=98.98%, sys=0.71%, ctx=16, majf=0, minf=44 00:37:06.049 IO depths : 1=1.5%, 2=3.7%, 4=11.4%, 8=70.1%, 16=13.2%, 32=0.0%, >=64=0.0% 00:37:06.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.049 complete : 0=0.0%, 4=91.1%, 8=5.4%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.049 issued rwts: total=6438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.049 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.049 filename2: (groupid=0, jobs=1): err= 0: pid=3828574: Wed Nov 20 07:35:26 2024 00:37:06.049 read: IOPS=647, BW=2590KiB/s (2653kB/s)(25.3MiB/10012msec) 00:37:06.049 slat (usec): min=5, max=110, avg=17.04, stdev=15.06 00:37:06.049 clat (msec): min=11, max=480, avg=24.55, stdev=22.50 00:37:06.049 lat (msec): min=11, max=480, avg=24.57, stdev=22.50 00:37:06.049 clat percentiles (msec): 00:37:06.049 | 1.00th=[ 14], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 23], 00:37:06.049 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 24], 00:37:06.049 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:37:06.049 | 99.00th=[ 31], 99.50th=[ 39], 99.90th=[ 363], 99.95th=[ 363], 00:37:06.049 | 99.99th=[ 481] 00:37:06.049 bw ( KiB/s): min= 128, max= 2816, per=4.14%, avg=2575.16, stdev=602.37, samples=19 00:37:06.049 iops : min= 32, max= 704, avg=643.79, stdev=150.59, samples=19 00:37:06.049 lat (msec) : 20=3.29%, 50=96.22%, 250=0.03%, 500=0.46% 00:37:06.049 cpu : usr=98.75%, sys=0.86%, ctx=47, majf=0, minf=34 00:37:06.049 IO depths : 1=5.5%, 2=11.4%, 4=24.0%, 8=52.1%, 16=7.0%, 32=0.0%, >=64=0.0% 00:37:06.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.049 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.049 issued rwts: total=6484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.049 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.049 filename2: (groupid=0, jobs=1): err= 0: pid=3828575: Wed Nov 20 07:35:26 2024 00:37:06.049 read: IOPS=647, BW=2592KiB/s (2654kB/s)(25.3MiB/10003msec) 00:37:06.049 slat (usec): min=5, max=120, avg=18.55, stdev=16.82 00:37:06.049 clat (msec): min=5, max=625, avg=24.57, stdev=24.75 00:37:06.049 lat (msec): min=5, max=625, avg=24.59, stdev=24.75 00:37:06.049 clat percentiles (msec): 00:37:06.050 | 1.00th=[ 14], 5.00th=[ 17], 10.00th=[ 20], 20.00th=[ 23], 00:37:06.050 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:06.050 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 27], 95.00th=[ 30], 00:37:06.050 | 99.00th=[ 37], 99.50th=[ 41], 99.90th=[ 363], 99.95th=[ 625], 00:37:06.050 | 99.99th=[ 625] 00:37:06.050 bw ( KiB/s): min= 96, max= 2976, per=4.13%, avg=2569.95, stdev=621.37, samples=19 00:37:06.050 iops : min= 24, max= 744, avg=642.47, stdev=155.34, samples=19 00:37:06.050 lat (msec) : 10=0.19%, 20=11.68%, 50=87.64%, 100=0.06%, 500=0.37% 00:37:06.050 lat (msec) : 750=0.06% 00:37:06.050 cpu : usr=98.82%, 
sys=0.85%, ctx=15, majf=0, minf=32 00:37:06.050 IO depths : 1=1.8%, 2=3.5%, 4=9.7%, 8=72.1%, 16=13.0%, 32=0.0%, >=64=0.0% 00:37:06.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.050 complete : 0=0.0%, 4=90.5%, 8=6.1%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.050 issued rwts: total=6481,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.050 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:06.050 00:37:06.050 Run status group 0 (all jobs): 00:37:06.050 READ: bw=60.8MiB/s (63.7MB/s), 2539KiB/s-2736KiB/s (2600kB/s-2802kB/s), io=611MiB (641MB), run=10002-10049msec 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 
-- # xtrace_disable 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:06.050 bdev_null0 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:06.050 [2024-11-20 07:35:27.262579] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:06.050 07:35:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:06.050 bdev_null1 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:06.050 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:06.051 { 00:37:06.051 "params": { 00:37:06.051 "name": "Nvme$subsystem", 00:37:06.051 "trtype": "$TEST_TRANSPORT", 00:37:06.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:06.051 "adrfam": "ipv4", 00:37:06.051 "trsvcid": "$NVMF_PORT", 00:37:06.051 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:37:06.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:06.051 "hdgst": ${hdgst:-false}, 00:37:06.051 "ddgst": ${ddgst:-false} 00:37:06.051 }, 00:37:06.051 "method": "bdev_nvme_attach_controller" 00:37:06.051 } 00:37:06.051 EOF 00:37:06.051 )") 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:06.051 { 00:37:06.051 "params": { 00:37:06.051 "name": "Nvme$subsystem", 00:37:06.051 "trtype": "$TEST_TRANSPORT", 00:37:06.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:06.051 "adrfam": "ipv4", 00:37:06.051 "trsvcid": "$NVMF_PORT", 00:37:06.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:06.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:06.051 "hdgst": ${hdgst:-false}, 00:37:06.051 "ddgst": ${ddgst:-false} 00:37:06.051 }, 00:37:06.051 "method": "bdev_nvme_attach_controller" 00:37:06.051 } 00:37:06.051 EOF 00:37:06.051 )") 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:06.051 "params": { 00:37:06.051 "name": "Nvme0", 00:37:06.051 "trtype": "tcp", 00:37:06.051 "traddr": "10.0.0.2", 00:37:06.051 "adrfam": "ipv4", 00:37:06.051 "trsvcid": "4420", 00:37:06.051 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:06.051 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:06.051 "hdgst": false, 00:37:06.051 "ddgst": false 00:37:06.051 }, 00:37:06.051 "method": "bdev_nvme_attach_controller" 00:37:06.051 },{ 00:37:06.051 "params": { 00:37:06.051 "name": "Nvme1", 00:37:06.051 "trtype": "tcp", 00:37:06.051 "traddr": "10.0.0.2", 00:37:06.051 "adrfam": "ipv4", 00:37:06.051 "trsvcid": "4420", 00:37:06.051 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:06.051 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:06.051 "hdgst": false, 00:37:06.051 "ddgst": false 00:37:06.051 }, 00:37:06.051 "method": "bdev_nvme_attach_controller" 00:37:06.051 }' 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:06.051 07:35:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:06.051 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:06.051 ... 00:37:06.051 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:06.051 ... 
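The second fio pass (target/dif.sh@115-118) re-creates the null bdevs with --dif-type 1 and the randomized job parameters chosen above: bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1. gen_fio_conf itself is not echoed, but the filename0/filename1 banners are consistent with a job file along these lines; the section names, filename/bdev names, time_based wiring, and thread flag are assumptions for illustration, not part of the transcript:

  [global]
  ioengine=spdk_bdev
  thread=1              ; spdk_bdev jobs run as threads
  rw=randread
  bs=8k,16k,128k        ; per-direction sizes: 8k read / 16k write / 128k trim
  iodepth=8
  numjobs=2
  time_based=1
  runtime=5

  [filename0]
  filename=Nvme0n1      ; bdev created by bdev_nvme_attach_controller for cnode0

  [filename1]
  filename=Nvme1n1

With two files and numjobs=2 this yields the four worker threads that fio reports starting below.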
00:37:06.051 fio-3.35 00:37:06.051 Starting 4 threads 00:37:12.633 00:37:12.633 filename0: (groupid=0, jobs=1): err= 0: pid=3830818: Wed Nov 20 07:35:33 2024 00:37:12.633 read: IOPS=2952, BW=23.1MiB/s (24.2MB/s)(115MiB/5002msec) 00:37:12.633 slat (nsec): min=5458, max=31062, avg=5997.78, stdev=1376.11 00:37:12.633 clat (usec): min=888, max=44094, avg=2693.93, stdev=1062.98 00:37:12.633 lat (usec): min=894, max=44125, avg=2699.93, stdev=1063.20 00:37:12.633 clat percentiles (usec): 00:37:12.633 | 1.00th=[ 1663], 5.00th=[ 2008], 10.00th=[ 2147], 20.00th=[ 2376], 00:37:12.633 | 30.00th=[ 2507], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2671], 00:37:12.633 | 70.00th=[ 2704], 80.00th=[ 2868], 90.00th=[ 3425], 95.00th=[ 3523], 00:37:12.633 | 99.00th=[ 3916], 99.50th=[ 4424], 99.90th=[ 5342], 99.95th=[43779], 00:37:12.633 | 99.99th=[44303] 00:37:12.633 bw ( KiB/s): min=16992, max=26224, per=25.93%, avg=23600.00, stdev=2998.74, samples=9 00:37:12.633 iops : min= 2124, max= 3278, avg=2950.00, stdev=374.84, samples=9 00:37:12.633 lat (usec) : 1000=0.02% 00:37:12.633 lat (msec) : 2=4.37%, 4=94.71%, 10=0.85%, 50=0.05% 00:37:12.633 cpu : usr=96.04%, sys=3.70%, ctx=5, majf=0, minf=27 00:37:12.633 IO depths : 1=0.1%, 2=1.3%, 4=69.2%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:12.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:12.633 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:12.633 issued rwts: total=14768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:12.633 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:12.633 filename0: (groupid=0, jobs=1): err= 0: pid=3830819: Wed Nov 20 07:35:33 2024 00:37:12.633 read: IOPS=2799, BW=21.9MiB/s (22.9MB/s)(109MiB/5001msec) 00:37:12.633 slat (nsec): min=5475, max=47956, avg=7709.39, stdev=2244.55 00:37:12.633 clat (usec): min=1429, max=5636, avg=2836.85, stdev=439.50 00:37:12.633 lat (usec): min=1437, max=5644, avg=2844.56, stdev=439.80 00:37:12.633 clat percentiles (usec): 00:37:12.633 | 1.00th=[ 2008], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2606], 00:37:12.633 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 00:37:12.633 | 70.00th=[ 2900], 80.00th=[ 3097], 90.00th=[ 3490], 95.00th=[ 3654], 00:37:12.633 | 99.00th=[ 4228], 99.50th=[ 4555], 99.90th=[ 5211], 99.95th=[ 5276], 00:37:12.633 | 99.99th=[ 5604] 00:37:12.633 bw ( KiB/s): min=17984, max=23360, per=24.44%, avg=22248.89, stdev=1929.53, samples=9 00:37:12.633 iops : min= 2248, max= 2920, avg=2781.11, stdev=241.19, samples=9 00:37:12.633 lat (msec) : 2=0.94%, 4=97.08%, 10=1.99% 00:37:12.633 cpu : usr=95.68%, sys=4.04%, ctx=8, majf=0, minf=46 00:37:12.633 IO depths : 1=0.1%, 2=0.5%, 4=71.5%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:12.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:12.633 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:12.633 issued rwts: total=14001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:12.633 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:12.633 filename1: (groupid=0, jobs=1): err= 0: pid=3830820: Wed Nov 20 07:35:33 2024 00:37:12.633 read: IOPS=2821, BW=22.0MiB/s (23.1MB/s)(110MiB/5001msec) 00:37:12.633 slat (nsec): min=5462, max=67228, avg=7242.20, stdev=2304.43 00:37:12.633 clat (usec): min=891, max=6323, avg=2816.43, stdev=404.50 00:37:12.633 lat (usec): min=897, max=6348, avg=2823.67, stdev=404.81 00:37:12.633 clat percentiles (usec): 00:37:12.633 | 1.00th=[ 2024], 5.00th=[ 2343], 10.00th=[ 2474], 
20.00th=[ 2606], 00:37:12.633 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 00:37:12.633 | 70.00th=[ 2868], 80.00th=[ 2999], 90.00th=[ 3490], 95.00th=[ 3556], 00:37:12.633 | 99.00th=[ 4015], 99.50th=[ 4228], 99.90th=[ 5014], 99.95th=[ 5342], 00:37:12.633 | 99.99th=[ 6259] 00:37:12.633 bw ( KiB/s): min=18016, max=23776, per=24.64%, avg=22432.00, stdev=1996.48, samples=9 00:37:12.633 iops : min= 2252, max= 2972, avg=2804.00, stdev=249.56, samples=9 00:37:12.633 lat (usec) : 1000=0.01% 00:37:12.633 lat (msec) : 2=0.82%, 4=98.14%, 10=1.02% 00:37:12.633 cpu : usr=95.74%, sys=3.98%, ctx=6, majf=0, minf=37 00:37:12.633 IO depths : 1=0.1%, 2=0.2%, 4=71.6%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:12.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:12.633 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:12.633 issued rwts: total=14108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:12.633 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:12.633 filename1: (groupid=0, jobs=1): err= 0: pid=3830821: Wed Nov 20 07:35:33 2024 00:37:12.633 read: IOPS=2807, BW=21.9MiB/s (23.0MB/s)(110MiB/5001msec) 00:37:12.633 slat (nsec): min=5465, max=53656, avg=6135.05, stdev=1862.64 00:37:12.633 clat (usec): min=1085, max=5124, avg=2833.90, stdev=453.73 00:37:12.633 lat (usec): min=1101, max=5130, avg=2840.03, stdev=453.63 00:37:12.633 clat percentiles (usec): 00:37:12.634 | 1.00th=[ 1975], 5.00th=[ 2278], 10.00th=[ 2442], 20.00th=[ 2573], 00:37:12.634 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:37:12.634 | 70.00th=[ 2868], 80.00th=[ 3130], 90.00th=[ 3523], 95.00th=[ 3785], 00:37:12.634 | 99.00th=[ 4228], 99.50th=[ 4424], 99.90th=[ 4752], 99.95th=[ 5080], 00:37:12.634 | 99.99th=[ 5145] 00:37:12.634 bw ( KiB/s): min=18736, max=23680, per=24.55%, avg=22350.22, stdev=1724.13, samples=9 00:37:12.634 iops : min= 2342, max= 2960, avg=2793.78, stdev=215.52, samples=9 00:37:12.634 lat (msec) : 2=1.18%, 4=96.49%, 10=2.34% 00:37:12.634 cpu : usr=95.78%, sys=3.96%, ctx=7, majf=0, minf=68 00:37:12.634 IO depths : 1=0.1%, 2=0.4%, 4=70.8%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:12.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:12.634 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:12.634 issued rwts: total=14038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:12.634 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:12.634 00:37:12.634 Run status group 0 (all jobs): 00:37:12.634 READ: bw=88.9MiB/s (93.2MB/s), 21.9MiB/s-23.1MiB/s (22.9MB/s-24.2MB/s), io=445MiB (466MB), run=5001-5002msec 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.634 00:37:12.634 real 0m24.618s 00:37:12.634 user 5m21.236s 00:37:12.634 sys 0m4.519s 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:12.634 07:35:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:12.634 ************************************ 00:37:12.634 END TEST fio_dif_rand_params 00:37:12.634 ************************************ 00:37:12.634 07:35:33 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:12.634 07:35:33 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:12.634 07:35:33 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:12.634 07:35:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:12.634 ************************************ 00:37:12.634 START TEST fio_dif_digest 00:37:12.634 ************************************ 00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@"
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:37:12.634 bdev_null0
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:37:12.634 [2024-11-20 07:35:33.967945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=()
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:37:12.634 {
00:37:12.634 "params": {
00:37:12.634 "name": "Nvme$subsystem",
00:37:12.634 "trtype": "$TEST_TRANSPORT",
00:37:12.634 "traddr": "$NVMF_FIRST_TARGET_IP",
00:37:12.634 "adrfam": "ipv4",
00:37:12.634 "trsvcid": "$NVMF_PORT",
00:37:12.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:37:12.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:37:12.634 "hdgst": ${hdgst:-false},
00:37:12.634 "ddgst": ${ddgst:-false}
00:37:12.634 },
00:37:12.634 "method": "bdev_nvme_attach_controller"
00:37:12.634 }
00:37:12.634 EOF
00:37:12.634 )")
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib=
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 ))
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files ))
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq .
00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:37:12.634 07:35:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:12.634 "params": { 00:37:12.634 "name": "Nvme0", 00:37:12.634 "trtype": "tcp", 00:37:12.634 "traddr": "10.0.0.2", 00:37:12.634 "adrfam": "ipv4", 00:37:12.634 "trsvcid": "4420", 00:37:12.634 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:12.634 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:12.634 "hdgst": true, 00:37:12.634 "ddgst": true 00:37:12.634 }, 00:37:12.635 "method": "bdev_nvme_attach_controller" 00:37:12.635 }' 00:37:12.635 07:35:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:12.635 07:35:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:12.635 07:35:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:12.635 07:35:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:12.635 07:35:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:37:12.635 07:35:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:12.635 07:35:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:12.635 07:35:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:12.635 07:35:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:12.635 07:35:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:12.635 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:12.635 ... 
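
Unlike the earlier fio_dif_rand_params run, this config enables hdgst and ddgst, so the initiator-side bdev_nvme computes and verifies CRC32C header and data digests on every NVMe/TCP PDU. Reconstructed from the parameters set at target/dif.sh@127-128 above (bs=128k, numjobs=3, iodepth=3, runtime=10), the generated job boils down to something like the sketch below; the thread/time_based options and the Nvme0n1 bdev name (controller Nvme0, namespace 1) follow SPDK conventions rather than being copied from this log:

cat <<'EOF' > dif_digest.job   # hypothetical file name
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
EOF
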
00:37:12.635 fio-3.35 00:37:12.635 Starting 3 threads 00:37:24.867 00:37:24.867 filename0: (groupid=0, jobs=1): err= 0: pid=3832308: Wed Nov 20 07:35:44 2024 00:37:24.867 read: IOPS=276, BW=34.5MiB/s (36.2MB/s)(347MiB/10045msec) 00:37:24.867 slat (nsec): min=5850, max=44651, avg=8603.27, stdev=2280.58 00:37:24.867 clat (usec): min=6844, max=48435, avg=10841.12, stdev=1299.43 00:37:24.867 lat (usec): min=6853, max=48451, avg=10849.72, stdev=1299.43 00:37:24.867 clat percentiles (usec): 00:37:24.867 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:37:24.867 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:37:24.867 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12256], 00:37:24.867 | 99.00th=[13042], 99.50th=[13435], 99.90th=[14353], 99.95th=[45351], 00:37:24.867 | 99.99th=[48497] 00:37:24.867 bw ( KiB/s): min=33536, max=37376, per=29.76%, avg=35468.80, stdev=904.05, samples=20 00:37:24.867 iops : min= 262, max= 292, avg=277.10, stdev= 7.06, samples=20 00:37:24.867 lat (msec) : 10=15.98%, 20=83.95%, 50=0.07% 00:37:24.867 cpu : usr=90.00%, sys=7.14%, ctx=897, majf=0, minf=163 00:37:24.867 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:24.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.867 issued rwts: total=2773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:24.867 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:24.867 filename0: (groupid=0, jobs=1): err= 0: pid=3832309: Wed Nov 20 07:35:44 2024 00:37:24.867 read: IOPS=302, BW=37.8MiB/s (39.6MB/s)(380MiB/10043msec) 00:37:24.867 slat (nsec): min=8306, max=42074, avg=9138.23, stdev=1052.86 00:37:24.867 clat (usec): min=5933, max=50217, avg=9896.49, stdev=1339.15 00:37:24.867 lat (usec): min=5942, max=50226, avg=9905.63, stdev=1339.12 00:37:24.867 clat percentiles (usec): 00:37:24.868 | 1.00th=[ 7767], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9110], 00:37:24.868 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:37:24.868 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10945], 95.00th=[11338], 00:37:24.868 | 99.00th=[11994], 99.50th=[12256], 99.90th=[13042], 99.95th=[47973], 00:37:24.868 | 99.99th=[50070] 00:37:24.868 bw ( KiB/s): min=36096, max=43264, per=32.60%, avg=38848.00, stdev=1945.87, samples=20 00:37:24.868 iops : min= 282, max= 338, avg=303.50, stdev=15.20, samples=20 00:37:24.868 lat (msec) : 10=56.44%, 20=43.50%, 50=0.03%, 100=0.03% 00:37:24.868 cpu : usr=95.05%, sys=4.71%, ctx=22, majf=0, minf=169 00:37:24.868 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:24.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.868 issued rwts: total=3037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:24.868 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:24.868 filename0: (groupid=0, jobs=1): err= 0: pid=3832310: Wed Nov 20 07:35:44 2024 00:37:24.868 read: IOPS=352, BW=44.1MiB/s (46.2MB/s)(443MiB/10046msec) 00:37:24.868 slat (nsec): min=5812, max=31389, avg=7539.94, stdev=1551.91 00:37:24.868 clat (usec): min=6636, max=51602, avg=8485.09, stdev=1676.90 00:37:24.868 lat (usec): min=6645, max=51608, avg=8492.63, stdev=1676.86 00:37:24.868 clat percentiles (usec): 00:37:24.868 | 1.00th=[ 7111], 5.00th=[ 7439], 10.00th=[ 7635], 20.00th=[ 7898], 
00:37:24.868 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8586], 00:37:24.868 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9241], 95.00th=[ 9503], 00:37:24.868 | 99.00th=[ 9896], 99.50th=[10290], 99.90th=[48497], 99.95th=[51119], 00:37:24.868 | 99.99th=[51643] 00:37:24.868 bw ( KiB/s): min=40448, max=47104, per=38.03%, avg=45324.80, stdev=1525.80, samples=20 00:37:24.868 iops : min= 316, max= 368, avg=354.10, stdev=11.92, samples=20 00:37:24.868 lat (msec) : 10=99.18%, 20=0.68%, 50=0.06%, 100=0.08% 00:37:24.868 cpu : usr=95.66%, sys=4.11%, ctx=16, majf=0, minf=83 00:37:24.868 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:24.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.868 issued rwts: total=3543,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:24.868 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:24.868 00:37:24.868 Run status group 0 (all jobs): 00:37:24.868 READ: bw=116MiB/s (122MB/s), 34.5MiB/s-44.1MiB/s (36.2MB/s-46.2MB/s), io=1169MiB (1226MB), run=10043-10046msec 00:37:24.868 07:35:45 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:24.868 07:35:45 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:24.868 07:35:45 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:24.868 07:35:45 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:24.868 07:35:45 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:24.868 07:35:45 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:24.868 07:35:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.868 07:35:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:24.868 07:35:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.868 07:35:45 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:24.868 07:35:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.868 07:35:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:24.868 07:35:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.868 00:37:24.868 real 0m11.173s 00:37:24.868 user 0m44.312s 00:37:24.868 sys 0m1.939s 00:37:24.868 07:35:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:24.868 07:35:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:24.868 ************************************ 00:37:24.868 END TEST fio_dif_digest 00:37:24.868 ************************************ 00:37:24.868 07:35:45 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:24.868 07:35:45 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:24.868 07:35:45 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:24.868 07:35:45 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:24.868 07:35:45 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:24.868 07:35:45 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:24.868 07:35:45 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:24.868 07:35:45 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:24.868 rmmod nvme_tcp 00:37:24.868 rmmod nvme_fabrics 00:37:24.868 rmmod nvme_keyring 00:37:24.868 07:35:45 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:24.868 07:35:45 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:24.868 07:35:45 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:24.868 07:35:45 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3821864 ']' 00:37:24.868 07:35:45 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3821864 00:37:24.868 07:35:45 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 3821864 ']' 00:37:24.868 07:35:45 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 3821864 00:37:24.868 07:35:45 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:37:24.868 07:35:45 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:24.868 07:35:45 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3821864 00:37:24.868 07:35:45 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:24.868 07:35:45 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:24.868 07:35:45 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3821864' 00:37:24.868 killing process with pid 3821864 00:37:24.868 07:35:45 nvmf_dif -- common/autotest_common.sh@971 -- # kill 3821864 00:37:24.868 07:35:45 nvmf_dif -- common/autotest_common.sh@976 -- # wait 3821864 00:37:24.868 07:35:45 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:24.868 07:35:45 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:26.785 Waiting for block devices as requested 00:37:26.785 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:26.785 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:26.785 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:26.785 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:27.046 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:27.046 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:27.046 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:27.308 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:27.308 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:27.569 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:27.569 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:27.569 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:27.831 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:27.831 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:27.831 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:28.092 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:28.092 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:28.352 07:35:50 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:28.352 07:35:50 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:28.352 07:35:50 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:28.352 07:35:50 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:37:28.352 07:35:50 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:28.352 07:35:50 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:37:28.352 07:35:50 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:28.352 07:35:50 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:28.352 07:35:50 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:28.352 07:35:50 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:28.352 07:35:50 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:30.897 07:35:52 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:30.897 
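
The teardown interleaved through the rmmod messages above follows a fixed order: kernel initiator modules out first, then the userspace target, then the firewall rules and network namespace the test created. Condensed into plain commands (the pid and interface names are the ones from this run, and the netns deletion is implied by the hidden remove_spdk_ns step):

modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring       # initiator modules
kill 3821864                                            # nvmf_tgt (reactor_0)
iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only test rules
ip netns delete cvl_0_0_ns_spdk                         # target namespace
ip -4 addr flush cvl_0_1                                # initiator interface
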
00:37:30.897 real 1m18.769s 00:37:30.897 user 8m9.248s 00:37:30.897 sys 0m22.363s 00:37:30.897 07:35:52 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:30.897 07:35:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:30.897 ************************************ 00:37:30.897 END TEST nvmf_dif 00:37:30.897 ************************************ 00:37:30.897 07:35:52 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:30.897 07:35:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:30.897 07:35:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:30.897 07:35:52 -- common/autotest_common.sh@10 -- # set +x 00:37:30.897 ************************************ 00:37:30.897 START TEST nvmf_abort_qd_sizes 00:37:30.897 ************************************ 00:37:30.897 07:35:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:30.897 * Looking for test storage... 00:37:30.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:30.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.898 --rc genhtml_branch_coverage=1 00:37:30.898 --rc genhtml_function_coverage=1 00:37:30.898 --rc genhtml_legend=1 00:37:30.898 --rc geninfo_all_blocks=1 00:37:30.898 --rc geninfo_unexecuted_blocks=1 00:37:30.898 00:37:30.898 ' 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:30.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.898 --rc genhtml_branch_coverage=1 00:37:30.898 --rc genhtml_function_coverage=1 00:37:30.898 --rc genhtml_legend=1 00:37:30.898 --rc geninfo_all_blocks=1 00:37:30.898 --rc geninfo_unexecuted_blocks=1 00:37:30.898 00:37:30.898 ' 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:30.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.898 --rc genhtml_branch_coverage=1 00:37:30.898 --rc genhtml_function_coverage=1 00:37:30.898 --rc genhtml_legend=1 00:37:30.898 --rc geninfo_all_blocks=1 00:37:30.898 --rc geninfo_unexecuted_blocks=1 00:37:30.898 00:37:30.898 ' 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:30.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.898 --rc genhtml_branch_coverage=1 00:37:30.898 --rc genhtml_function_coverage=1 00:37:30.898 --rc genhtml_legend=1 00:37:30.898 --rc geninfo_all_blocks=1 00:37:30.898 --rc geninfo_unexecuted_blocks=1 00:37:30.898 00:37:30.898 ' 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:30.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:37:30.898 07:35:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:39.044 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:39.044 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:39.044 07:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:39.044 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:39.044 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:39.044 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:39.044 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:39.044 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:39.044 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:39.044 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:39.044 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:39.044 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:39.044 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:39.044 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:39.044 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:39.044 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:39.044 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:37:39.044 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:39.044 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:39.044 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:39.044 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:39.045 07:36:00 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:39.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:39.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:37:39.045 00:37:39.045 --- 10.0.0.2 ping statistics --- 00:37:39.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:39.045 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:39.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:39.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:37:39.045 00:37:39.045 --- 10.0.0.1 ping statistics --- 00:37:39.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:39.045 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:39.045 07:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:41.592 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:41.592 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:41.592 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:41.592 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:41.592 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:41.592 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:41.592 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:41.592 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:41.592 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:41.592 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:41.592 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:41.592 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:41.592 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:41.592 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:41.592 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:41.592 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:41.853 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:42.114 07:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:42.114 07:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:42.114 07:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:42.114 07:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:42.114 07:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:42.114 07:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:42.114 07:36:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:42.114 07:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:42.114 07:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:42.114 07:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:42.114 07:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3841871 00:37:42.114 07:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3841871 00:37:42.114 07:36:04 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:42.114 07:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 3841871 ']' 00:37:42.114 07:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:42.114 07:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:42.114 07:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
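
What follows in the log is nvmf_tgt starting inside the cvl_0_0_ns_spdk namespace (core mask 0xf, all trace groups enabled) while the harness blocks until the RPC socket answers. A simplified equivalent of that launch-and-wait, with an rpc_get_methods polling loop standing in for waitforlisten's internals:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!
# Poll /var/tmp/spdk.sock until the target services RPCs.
until "$spdk/scripts/rpc.py" -t 1 rpc_get_methods > /dev/null 2>&1; do
    sleep 0.2
done
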
00:37:42.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:42.114 07:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:42.114 07:36:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:42.114 [2024-11-20 07:36:04.355703] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:37:42.114 [2024-11-20 07:36:04.355760] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:42.375 [2024-11-20 07:36:04.454642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:42.375 [2024-11-20 07:36:04.509070] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:42.375 [2024-11-20 07:36:04.509119] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:42.375 [2024-11-20 07:36:04.509127] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:42.375 [2024-11-20 07:36:04.509135] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:42.376 [2024-11-20 07:36:04.509141] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:42.376 [2024-11-20 07:36:04.511184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:42.376 [2024-11-20 07:36:04.511310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:42.376 [2024-11-20 07:36:04.511472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:42.376 [2024-11-20 07:36:04.511474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:42.948 07:36:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:42.948 07:36:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:37:42.948 07:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:42.948 07:36:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:42.948 07:36:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:42.948 07:36:05 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:42.948 07:36:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:42.948 07:36:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:43.210 07:36:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:43.210 07:36:05 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:43.210 07:36:05 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:43.210 07:36:05 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:37:43.210 07:36:05 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:43.210 07:36:05 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:43.210 07:36:05 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:37:43.210 07:36:05 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:43.210 
07:36:05 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:43.210 07:36:05 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:43.210 07:36:05 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:43.210 07:36:05 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:37:43.210 07:36:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:43.210 07:36:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:37:43.210 07:36:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:43.210 07:36:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:43.210 07:36:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:43.210 07:36:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:43.210 ************************************ 00:37:43.210 START TEST spdk_target_abort 00:37:43.210 ************************************ 00:37:43.210 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:37:43.210 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:43.210 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:37:43.210 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.210 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:43.471 spdk_targetn1 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:43.471 [2024-11-20 07:36:05.584125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:43.471 [2024-11-20 07:36:05.632450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:43.471 07:36:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:43.733 [2024-11-20 07:36:05.782859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:188 nsid:1 lba:24 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:43.733 [2024-11-20 07:36:05.782899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0004 p:1 m:0 dnr:0 00:37:43.733 [2024-11-20 07:36:05.788703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:168 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:43.733 [2024-11-20 07:36:05.788726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0017 p:1 m:0 dnr:0 00:37:43.733 [2024-11-20 07:36:05.789534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:232 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:43.733 [2024-11-20 07:36:05.789558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:001e p:1 m:0 dnr:0 00:37:43.733 [2024-11-20 07:36:05.797769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:488 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:37:43.733 [2024-11-20 07:36:05.797791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0041 p:1 m:0 dnr:0 00:37:43.733 [2024-11-20 07:36:05.797869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:504 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:43.733 [2024-11-20 07:36:05.797879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0041 p:1 m:0 dnr:0 00:37:43.733 [2024-11-20 07:36:05.853702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2272 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:37:43.733 [2024-11-20 07:36:05.853731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:37:43.733 [2024-11-20 07:36:05.869678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2792 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:43.733 [2024-11-20 07:36:05.869705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:43.733 [2024-11-20 07:36:05.870529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2832 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:43.733 [2024-11-20 07:36:05.870550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:37:43.733 [2024-11-20 07:36:05.893757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3568 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:43.733 [2024-11-20 07:36:05.893787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00bf p:0 m:0 dnr:0 00:37:43.733 [2024-11-20 07:36:05.894347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3576 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:43.733 [2024-11-20 07:36:05.894366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00c2 p:0 m:0 dnr:0 00:37:47.034 Initializing NVMe Controllers 00:37:47.034 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:47.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 
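The run above is the qd=4 leg of the sweep in abort_qd_sizes.sh: rabort assembles the transport ID string field by field (the "for r in trtype adrfam traddr trsvcid subnqn" loop traced earlier) and hands each queue depth to the abort example app. Condensed into a standalone sketch, with every value taken from the trace:

  qds=(4 24 64)
  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

  for qd in "${qds[@]}"; do
      # -q: abort queue depth, -w rw -M 50: 50/50 read/write mix, -o 4096: 4 KiB I/O.
      # The app floods the subsystem with I/O, submits aborts against it, and prints
      # the I/O completed/failed and abort success/unsuccessful counters summarized below.
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
          -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done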
00:37:47.034 Initialization complete. Launching workers. 00:37:47.034 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10795, failed: 10 00:37:47.034 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2332, failed to submit 8473 00:37:47.034 success 762, unsuccessful 1570, failed 0 00:37:47.034 07:36:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:47.034 07:36:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:47.034 [2024-11-20 07:36:09.202415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:3664 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:37:47.034 [2024-11-20 07:36:09.202455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:00d6 p:0 m:0 dnr:0 00:37:48.416 [2024-11-20 07:36:10.654101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:36960 len:8 PRP1 0x200004e64000 PRP2 0x0 00:37:48.416 [2024-11-20 07:36:10.654132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:0016 p:1 m:0 dnr:0 00:37:48.986 [2024-11-20 07:36:11.175957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:48904 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:37:48.986 [2024-11-20 07:36:11.175981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:00f0 p:0 m:0 dnr:0 00:37:50.370 Initializing NVMe Controllers 00:37:50.370 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:50.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:50.370 Initialization complete. Launching workers. 00:37:50.370 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8621, failed: 3 00:37:50.370 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1241, failed to submit 7383 00:37:50.370 success 324, unsuccessful 917, failed 0 00:37:50.370 07:36:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:50.370 07:36:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:50.941 [2024-11-20 07:36:13.077826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:159 nsid:1 lba:84280 len:8 PRP1 0x200004ae8000 PRP2 0x0 00:37:50.941 [2024-11-20 07:36:13.077858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:159 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:53.484 Initializing NVMe Controllers 00:37:53.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:53.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:53.484 Initialization complete. Launching workers. 
00:37:53.484 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43610, failed: 1 00:37:53.484 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2822, failed to submit 40789 00:37:53.484 success 623, unsuccessful 2199, failed 0 00:37:53.484 07:36:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:53.484 07:36:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:53.484 07:36:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:53.484 07:36:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:53.484 07:36:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:53.484 07:36:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:53.484 07:36:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3841871 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 3841871 ']' 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 3841871 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3841871 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3841871' 00:37:55.411 killing process with pid 3841871 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 3841871 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 3841871 00:37:55.411 00:37:55.411 real 0m12.123s 00:37:55.411 user 0m49.384s 00:37:55.411 sys 0m2.056s 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:55.411 ************************************ 00:37:55.411 END TEST spdk_target_abort 00:37:55.411 ************************************ 00:37:55.411 07:36:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:55.411 07:36:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:55.411 07:36:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:55.411 07:36:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:55.411 ************************************ 00:37:55.411 START TEST kernel_target_abort 00:37:55.411 
************************************ 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:55.411 07:36:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:58.713 Waiting for block devices as requested 00:37:58.713 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:58.974 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:58.974 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:58.974 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:58.974 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:59.235 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:59.235 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:59.235 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:59.496 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:59.496 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:59.757 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:59.757 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:59.757 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:00.018 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:00.018 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:00.018 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:00.280 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:00.541 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:00.541 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:00.541 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:38:00.541 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:38:00.541 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:00.541 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:38:00.541 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:38:00.541 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:00.541 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:00.541 No valid GPT data, bailing 00:38:00.541 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:00.541 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:00.541 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:00.541 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:38:00.541 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:38:00.541 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:00.541 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:00.541 07:36:22 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:00.541 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:00.541 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:38:00.541 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:38:00.542 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:38:00.542 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:38:00.542 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:38:00.542 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:38:00.542 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:38:00.542 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:00.542 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:38:00.802 00:38:00.802 Discovery Log Number of Records 2, Generation counter 2 00:38:00.802 =====Discovery Log Entry 0====== 00:38:00.802 trtype: tcp 00:38:00.802 adrfam: ipv4 00:38:00.802 subtype: current discovery subsystem 00:38:00.802 treq: not specified, sq flow control disable supported 00:38:00.802 portid: 1 00:38:00.802 trsvcid: 4420 00:38:00.802 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:00.802 traddr: 10.0.0.1 00:38:00.802 eflags: none 00:38:00.802 sectype: none 00:38:00.802 =====Discovery Log Entry 1====== 00:38:00.802 trtype: tcp 00:38:00.802 adrfam: ipv4 00:38:00.802 subtype: nvme subsystem 00:38:00.802 treq: not specified, sq flow control disable supported 00:38:00.802 portid: 1 00:38:00.802 trsvcid: 4420 00:38:00.803 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:00.803 traddr: 10.0.0.1 00:38:00.803 eflags: none 00:38:00.803 sectype: none 00:38:00.803 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:00.803 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:00.803 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:00.803 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:00.803 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:00.803 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:00.803 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:00.803 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:00.803 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:00.803 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:00.803 07:36:22 
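The configure_kernel_target trace above drives the kernel nvmet configfs tree directly, but bash xtrace hides redirection targets, so the echo destinations are not visible. Reconstructed against the standard nvmet configfs ABI, the sequence is roughly the sketch below; the SPDK-nqn... string written at common.sh@693 goes to a model/serial attribute whose exact name the trace does not show, so it is omitted:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"   # configfs auto-creates namespaces/ with the subsystem
  mkdir "$nvmet/ports/1"

  echo 1            > "$subsys/attr_allow_any_host"        # first 'echo 1' in the trace
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # backing block device
  echo 1            > "$subsys/namespaces/1/enable"        # second 'echo 1'

  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
  echo tcp      > "$nvmet/ports/1/addr_trtype"
  echo 4420     > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4     > "$nvmet/ports/1/addr_adrfam"

  # Expose the subsystem on the port; the nvme discover output above confirms it.
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"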
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:00.803 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:00.803 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:00.803 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:00.803 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:00.803 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:00.803 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:00.803 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:00.803 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:00.803 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:00.803 07:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:04.104 Initializing NVMe Controllers 00:38:04.104 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:04.104 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:04.104 Initialization complete. Launching workers. 00:38:04.104 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67108, failed: 0 00:38:04.104 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67108, failed to submit 0 00:38:04.104 success 0, unsuccessful 67108, failed 0 00:38:04.104 07:36:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:04.104 07:36:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:07.403 Initializing NVMe Controllers 00:38:07.403 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:07.403 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:07.403 Initialization complete. Launching workers. 
00:38:07.403 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 114101, failed: 0 00:38:07.403 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28714, failed to submit 85387 00:38:07.403 success 0, unsuccessful 28714, failed 0 00:38:07.403 07:36:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:07.403 07:36:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:09.946 Initializing NVMe Controllers 00:38:09.946 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:09.946 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:09.946 Initialization complete. Launching workers. 00:38:09.946 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145611, failed: 0 00:38:09.946 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36442, failed to submit 109169 00:38:09.946 success 0, unsuccessful 36442, failed 0 00:38:09.946 07:36:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:09.946 07:36:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:09.946 07:36:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:38:09.946 07:36:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:09.946 07:36:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:10.206 07:36:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:10.206 07:36:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:10.206 07:36:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:38:10.206 07:36:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:38:10.206 07:36:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:13.507 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:13.507 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:13.507 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:13.507 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:13.768 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:13.768 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:13.768 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:13.768 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:13.768 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:13.768 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:13.768 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:13.768 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:13.768 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:13.768 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:13.768 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:38:13.768 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:15.678 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:15.939 00:38:15.939 real 0m20.544s 00:38:15.939 user 0m9.891s 00:38:15.939 sys 0m6.264s 00:38:15.939 07:36:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:15.939 07:36:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:15.939 ************************************ 00:38:15.939 END TEST kernel_target_abort 00:38:15.939 ************************************ 00:38:15.939 07:36:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:15.939 07:36:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:15.939 07:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:15.939 07:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:38:15.939 07:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:15.939 07:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:38:15.939 07:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:15.939 07:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:15.939 rmmod nvme_tcp 00:38:15.939 rmmod nvme_fabrics 00:38:15.939 rmmod nvme_keyring 00:38:15.939 07:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:15.939 07:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:15.939 07:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:15.939 07:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3841871 ']' 00:38:15.939 07:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3841871 00:38:15.939 07:36:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 3841871 ']' 00:38:15.939 07:36:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 3841871 00:38:15.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3841871) - No such process 00:38:15.939 07:36:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 3841871 is not found' 00:38:15.939 Process with pid 3841871 is not found 00:38:15.939 07:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:15.939 07:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:19.239 Waiting for block devices as requested 00:38:19.499 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:19.499 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:19.499 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:19.760 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:19.760 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:19.760 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:20.020 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:20.020 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:20.020 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:20.280 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:20.280 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:20.542 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:20.542 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:20.542 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:20.805 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:20.805 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:20.805 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:21.066 07:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:21.066 07:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:21.066 07:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:38:21.066 07:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:38:21.066 07:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:21.066 07:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:38:21.066 07:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:21.066 07:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:21.066 07:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:21.066 07:36:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:21.066 07:36:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:23.610 07:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:23.610 00:38:23.610 real 0m52.730s 00:38:23.610 user 1m4.705s 00:38:23.610 sys 0m19.561s 00:38:23.610 07:36:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:23.610 07:36:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:23.610 ************************************ 00:38:23.610 END TEST nvmf_abort_qd_sizes 00:38:23.610 ************************************ 00:38:23.610 07:36:45 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:23.610 07:36:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:23.610 07:36:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:23.610 07:36:45 -- common/autotest_common.sh@10 -- # set +x 00:38:23.610 ************************************ 00:38:23.610 START TEST keyring_file 00:38:23.610 ************************************ 00:38:23.610 07:36:45 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:23.610 * Looking for test storage... 
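For orientation before the keyring output continues: the nvmftestfini teardown that just scrolled past unloads the initiator-side modules, strips only the SPDK-tagged firewall rules, and removes the test namespace. A condensed sketch; the body of the _remove_spdk_ns helper is not shown in the trace, so the netns delete line is an assumption about what it does:

  # Unload initiator-side NVMe/TCP modules (the rmmod lines above are their output).
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # Drop only SPDK_NVMF-tagged rules, keep the rest of the firewall intact.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Assumed equivalent of _remove_spdk_ns, then flush the test interface.
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1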
00:38:23.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:23.610 07:36:45 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:23.610 07:36:45 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:38:23.610 07:36:45 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:23.610 07:36:45 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:23.610 07:36:45 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:23.610 07:36:45 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:23.610 07:36:45 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:23.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.611 --rc genhtml_branch_coverage=1 00:38:23.611 --rc genhtml_function_coverage=1 00:38:23.611 --rc genhtml_legend=1 00:38:23.611 --rc geninfo_all_blocks=1 00:38:23.611 --rc geninfo_unexecuted_blocks=1 00:38:23.611 00:38:23.611 ' 00:38:23.611 07:36:45 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:23.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.611 --rc genhtml_branch_coverage=1 00:38:23.611 --rc genhtml_function_coverage=1 00:38:23.611 --rc genhtml_legend=1 00:38:23.611 --rc geninfo_all_blocks=1 
00:38:23.611 --rc geninfo_unexecuted_blocks=1 00:38:23.611 00:38:23.611 ' 00:38:23.611 07:36:45 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:23.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.611 --rc genhtml_branch_coverage=1 00:38:23.611 --rc genhtml_function_coverage=1 00:38:23.611 --rc genhtml_legend=1 00:38:23.611 --rc geninfo_all_blocks=1 00:38:23.611 --rc geninfo_unexecuted_blocks=1 00:38:23.611 00:38:23.611 ' 00:38:23.611 07:36:45 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:23.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.611 --rc genhtml_branch_coverage=1 00:38:23.611 --rc genhtml_function_coverage=1 00:38:23.611 --rc genhtml_legend=1 00:38:23.611 --rc geninfo_all_blocks=1 00:38:23.611 --rc geninfo_unexecuted_blocks=1 00:38:23.611 00:38:23.611 ' 00:38:23.611 07:36:45 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:23.611 07:36:45 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:23.611 07:36:45 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:23.611 07:36:45 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:23.611 07:36:45 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:23.611 07:36:45 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:23.611 07:36:45 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.611 07:36:45 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.611 07:36:45 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.611 07:36:45 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:23.611 07:36:45 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@51 -- # : 0 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:23.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:23.611 07:36:45 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:23.611 07:36:45 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:23.611 07:36:45 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:23.611 07:36:45 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:23.611 07:36:45 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:23.611 07:36:45 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:23.611 07:36:45 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:23.611 07:36:45 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:38:23.611 07:36:45 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:23.611 07:36:45 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:23.611 07:36:45 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:23.611 07:36:45 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:23.611 07:36:45 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ZaMFljMB2T 00:38:23.611 07:36:45 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:23.611 07:36:45 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:23.611 07:36:45 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ZaMFljMB2T 00:38:23.611 07:36:45 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ZaMFljMB2T 00:38:23.612 07:36:45 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ZaMFljMB2T 00:38:23.612 07:36:45 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:23.612 07:36:45 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:23.612 07:36:45 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:23.612 07:36:45 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:23.612 07:36:45 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:23.612 07:36:45 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:23.612 07:36:45 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0UJK5Hkqbz 00:38:23.612 07:36:45 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:23.612 07:36:45 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:23.612 07:36:45 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:23.612 07:36:45 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:23.612 07:36:45 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:23.612 07:36:45 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:23.612 07:36:45 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:23.612 07:36:45 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0UJK5Hkqbz 00:38:23.612 07:36:45 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0UJK5Hkqbz 00:38:23.612 07:36:45 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.0UJK5Hkqbz 00:38:23.612 07:36:45 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:23.612 07:36:45 keyring_file -- keyring/file.sh@30 -- # tgtpid=3852497 00:38:23.612 07:36:45 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3852497 00:38:23.612 07:36:45 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3852497 ']' 00:38:23.612 07:36:45 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:23.612 07:36:45 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:23.612 07:36:45 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:23.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:23.612 07:36:45 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:23.612 07:36:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:23.872 [2024-11-20 07:36:45.890925] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:38:23.872 [2024-11-20 07:36:45.891002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852497 ] 00:38:23.872 [2024-11-20 07:36:45.982405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:23.872 [2024-11-20 07:36:46.035518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:24.444 07:36:46 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:24.444 07:36:46 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:38:24.444 07:36:46 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:24.444 07:36:46 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.444 07:36:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:24.444 [2024-11-20 07:36:46.717581] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:24.705 null0 00:38:24.705 [2024-11-20 07:36:46.749629] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:24.705 [2024-11-20 07:36:46.750187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:24.705 07:36:46 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.705 07:36:46 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:24.705 07:36:46 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:24.705 07:36:46 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:24.705 07:36:46 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:24.705 07:36:46 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:24.705 07:36:46 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:24.705 07:36:46 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:24.705 07:36:46 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:24.705 07:36:46 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.705 07:36:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:24.705 [2024-11-20 07:36:46.781694] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:24.705 request: 00:38:24.705 { 00:38:24.705 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:24.705 "secure_channel": false, 00:38:24.705 "listen_address": { 00:38:24.705 "trtype": "tcp", 00:38:24.705 "traddr": "127.0.0.1", 00:38:24.705 "trsvcid": "4420" 00:38:24.705 }, 00:38:24.706 "method": "nvmf_subsystem_add_listener", 00:38:24.706 "req_id": 1 00:38:24.706 } 00:38:24.706 Got JSON-RPC error response 00:38:24.706 response: 00:38:24.706 { 00:38:24.706 
"code": -32602, 00:38:24.706 "message": "Invalid parameters" 00:38:24.706 } 00:38:24.706 07:36:46 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:24.706 07:36:46 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:24.706 07:36:46 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:24.706 07:36:46 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:24.706 07:36:46 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:24.706 07:36:46 keyring_file -- keyring/file.sh@47 -- # bperfpid=3852547 00:38:24.706 07:36:46 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3852547 /var/tmp/bperf.sock 00:38:24.706 07:36:46 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3852547 ']' 00:38:24.706 07:36:46 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:24.706 07:36:46 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:24.706 07:36:46 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:24.706 07:36:46 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:24.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:24.706 07:36:46 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:24.706 07:36:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:24.706 [2024-11-20 07:36:46.852168] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:38:24.706 [2024-11-20 07:36:46.852232] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852547 ] 00:38:24.706 [2024-11-20 07:36:46.944312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.967 [2024-11-20 07:36:46.998981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:25.541 07:36:47 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:25.541 07:36:47 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:38:25.541 07:36:47 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZaMFljMB2T 00:38:25.541 07:36:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZaMFljMB2T 00:38:25.802 07:36:47 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0UJK5Hkqbz 00:38:25.802 07:36:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0UJK5Hkqbz 00:38:25.802 07:36:48 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:25.802 07:36:48 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:25.802 07:36:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:25.802 07:36:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:25.802 07:36:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:38:26.063 07:36:48 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.ZaMFljMB2T == \/\t\m\p\/\t\m\p\.\Z\a\M\F\l\j\M\B\2\T ]] 00:38:26.063 07:36:48 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:26.063 07:36:48 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:26.063 07:36:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:26.063 07:36:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:26.063 07:36:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:26.324 07:36:48 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.0UJK5Hkqbz == \/\t\m\p\/\t\m\p\.\0\U\J\K\5\H\k\q\b\z ]] 00:38:26.324 07:36:48 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:26.324 07:36:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:26.324 07:36:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:26.324 07:36:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:26.324 07:36:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:26.324 07:36:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:26.324 07:36:48 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:26.325 07:36:48 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:26.325 07:36:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:26.325 07:36:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:26.325 07:36:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:26.325 07:36:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:26.325 07:36:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:26.586 07:36:48 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:26.586 07:36:48 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:26.586 07:36:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:26.847 [2024-11-20 07:36:48.943714] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:26.847 nvme0n1 00:38:26.847 07:36:49 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:26.847 07:36:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:26.847 07:36:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:26.847 07:36:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:26.847 07:36:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:26.847 07:36:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:27.113 07:36:49 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:27.113 07:36:49 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:27.113 07:36:49 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:38:27.113 07:36:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:27.113 07:36:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:27.113 07:36:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:27.114 07:36:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:27.380 07:36:49 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:27.380 07:36:49 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:27.380 Running I/O for 1 seconds... 00:38:28.323 18757.00 IOPS, 73.27 MiB/s 00:38:28.323 Latency(us) 00:38:28.323 [2024-11-20T06:36:50.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:28.323 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:28.323 nvme0n1 : 1.00 18794.19 73.41 0.00 0.00 6790.88 3631.79 13052.59 00:38:28.323 [2024-11-20T06:36:50.601Z] =================================================================================================================== 00:38:28.323 [2024-11-20T06:36:50.601Z] Total : 18794.19 73.41 0.00 0.00 6790.88 3631.79 13052.59 00:38:28.323 { 00:38:28.323 "results": [ 00:38:28.323 { 00:38:28.323 "job": "nvme0n1", 00:38:28.323 "core_mask": "0x2", 00:38:28.323 "workload": "randrw", 00:38:28.323 "percentage": 50, 00:38:28.323 "status": "finished", 00:38:28.323 "queue_depth": 128, 00:38:28.323 "io_size": 4096, 00:38:28.323 "runtime": 1.004938, 00:38:28.323 "iops": 18794.194268701154, 00:38:28.323 "mibps": 73.41482136211388, 00:38:28.323 "io_failed": 0, 00:38:28.323 "io_timeout": 0, 00:38:28.323 "avg_latency_us": 6790.877540459928, 00:38:28.323 "min_latency_us": 3631.786666666667, 00:38:28.323 "max_latency_us": 13052.586666666666 00:38:28.323 } 00:38:28.323 ], 00:38:28.323 "core_count": 1 00:38:28.323 } 00:38:28.323 07:36:50 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:28.323 07:36:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:28.584 07:36:50 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:28.585 07:36:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:28.585 07:36:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:28.585 07:36:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:28.585 07:36:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:28.585 07:36:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:28.845 07:36:50 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:28.845 07:36:50 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:28.845 07:36:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:28.845 07:36:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:28.845 07:36:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:28.845 07:36:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:28.845 07:36:50 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:29.107 07:36:51 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:29.107 07:36:51 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:29.107 07:36:51 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:29.107 07:36:51 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:29.107 07:36:51 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:29.107 07:36:51 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:29.107 07:36:51 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:29.107 07:36:51 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:29.107 07:36:51 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:29.107 07:36:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:29.107 [2024-11-20 07:36:51.292996] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:29.107 [2024-11-20 07:36:51.293613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x940740 (107): Transport endpoint is not connected 00:38:29.107 [2024-11-20 07:36:51.294609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x940740 (9): Bad file descriptor 00:38:29.107 [2024-11-20 07:36:51.295611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:29.107 [2024-11-20 07:36:51.295617] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:29.107 [2024-11-20 07:36:51.295623] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:29.107 [2024-11-20 07:36:51.295629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
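The failed attach above is the intended negative case: the target listener only trusts key0's PSK, so a handshake attempted with key1 breaks the connection (errno 107) and surfaces through the RPC as code -5, as the request/response dump that follows shows. The test drives this via scripts/rpc.py; a rough, hand-rolled equivalent of that call as a JSON-RPC 2.0 client over the bperf UNIX socket (illustrative only, parameters copied from the dump below):

    import json
    import socket

    def rpc(sock_path: str, method: str, params: dict) -> dict:
        """Minimal JSON-RPC 2.0 call against an SPDK-style UNIX domain socket."""
        request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(request).encode())
            buf = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    raise ConnectionError("socket closed before a full response")
                buf += chunk
                try:
                    return json.loads(buf)  # returns once the JSON parses whole
                except json.JSONDecodeError:
                    continue

    resp = rpc("/var/tmp/bperf.sock", "bdev_nvme_attach_controller", {
        "name": "nvme0", "trtype": "tcp", "traddr": "127.0.0.1",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "psk": "key1",
    })
    print(resp.get("error") or resp.get("result"))  # expect code -5 here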
00:38:29.107 request: 00:38:29.107 { 00:38:29.107 "name": "nvme0", 00:38:29.107 "trtype": "tcp", 00:38:29.107 "traddr": "127.0.0.1", 00:38:29.107 "adrfam": "ipv4", 00:38:29.107 "trsvcid": "4420", 00:38:29.107 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:29.107 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:29.107 "prchk_reftag": false, 00:38:29.107 "prchk_guard": false, 00:38:29.107 "hdgst": false, 00:38:29.107 "ddgst": false, 00:38:29.107 "psk": "key1", 00:38:29.107 "allow_unrecognized_csi": false, 00:38:29.107 "method": "bdev_nvme_attach_controller", 00:38:29.107 "req_id": 1 00:38:29.107 } 00:38:29.107 Got JSON-RPC error response 00:38:29.107 response: 00:38:29.107 { 00:38:29.107 "code": -5, 00:38:29.107 "message": "Input/output error" 00:38:29.107 } 00:38:29.107 07:36:51 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:29.107 07:36:51 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:29.107 07:36:51 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:29.107 07:36:51 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:29.107 07:36:51 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:29.107 07:36:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:29.107 07:36:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:29.107 07:36:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:29.107 07:36:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:29.107 07:36:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:29.373 07:36:51 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:29.373 07:36:51 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:29.373 07:36:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:29.373 07:36:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:29.373 07:36:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:29.373 07:36:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:29.373 07:36:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:29.691 07:36:51 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:29.691 07:36:51 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:29.691 07:36:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:29.691 07:36:51 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:29.691 07:36:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:30.027 07:36:52 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:30.027 07:36:52 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:30.027 07:36:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:30.027 07:36:52 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:38:30.027 07:36:52 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.ZaMFljMB2T 00:38:30.027 07:36:52 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZaMFljMB2T 00:38:30.027 07:36:52 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:30.027 07:36:52 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZaMFljMB2T 00:38:30.027 07:36:52 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:30.027 07:36:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:30.027 07:36:52 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:30.027 07:36:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:30.027 07:36:52 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZaMFljMB2T 00:38:30.027 07:36:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZaMFljMB2T 00:38:30.339 [2024-11-20 07:36:52.394461] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ZaMFljMB2T': 0100660 00:38:30.339 [2024-11-20 07:36:52.394481] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:30.339 request: 00:38:30.339 { 00:38:30.339 "name": "key0", 00:38:30.339 "path": "/tmp/tmp.ZaMFljMB2T", 00:38:30.339 "method": "keyring_file_add_key", 00:38:30.339 "req_id": 1 00:38:30.339 } 00:38:30.339 Got JSON-RPC error response 00:38:30.339 response: 00:38:30.339 { 00:38:30.339 "code": -1, 00:38:30.339 "message": "Operation not permitted" 00:38:30.339 } 00:38:30.339 07:36:52 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:30.339 07:36:52 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:30.339 07:36:52 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:30.339 07:36:52 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:30.339 07:36:52 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.ZaMFljMB2T 00:38:30.339 07:36:52 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZaMFljMB2T 00:38:30.339 07:36:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZaMFljMB2T 00:38:30.339 07:36:52 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.ZaMFljMB2T 00:38:30.339 07:36:52 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:30.339 07:36:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:30.339 07:36:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:30.339 07:36:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:30.339 07:36:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:30.339 07:36:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:30.624 07:36:52 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:30.624 07:36:52 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:30.624 07:36:52 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:30.624 07:36:52 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:30.624 07:36:52 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:30.624 07:36:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:30.624 07:36:52 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:30.624 07:36:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:30.624 07:36:52 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:30.624 07:36:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:30.885 [2024-11-20 07:36:52.919802] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ZaMFljMB2T': No such file or directory 00:38:30.885 [2024-11-20 07:36:52.919816] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:30.885 [2024-11-20 07:36:52.919829] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:30.885 [2024-11-20 07:36:52.919834] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:30.885 [2024-11-20 07:36:52.919840] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:30.885 [2024-11-20 07:36:52.919845] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:30.885 request: 00:38:30.885 { 00:38:30.885 "name": "nvme0", 00:38:30.885 "trtype": "tcp", 00:38:30.885 "traddr": "127.0.0.1", 00:38:30.885 "adrfam": "ipv4", 00:38:30.885 "trsvcid": "4420", 00:38:30.885 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:30.885 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:30.885 "prchk_reftag": false, 00:38:30.885 "prchk_guard": false, 00:38:30.885 "hdgst": false, 00:38:30.885 "ddgst": false, 00:38:30.885 "psk": "key0", 00:38:30.885 "allow_unrecognized_csi": false, 00:38:30.885 "method": "bdev_nvme_attach_controller", 00:38:30.885 "req_id": 1 00:38:30.885 } 00:38:30.885 Got JSON-RPC error response 00:38:30.885 response: 00:38:30.885 { 00:38:30.885 "code": -19, 00:38:30.885 "message": "No such device" 00:38:30.885 } 00:38:30.885 07:36:52 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:30.885 07:36:52 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:30.885 07:36:52 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:30.885 07:36:52 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:30.885 07:36:52 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:30.885 07:36:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:30.885 07:36:53 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:30.885 07:36:53 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:38:30.885 07:36:53 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:30.885 07:36:53 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:30.885 07:36:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:30.885 07:36:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:30.885 07:36:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9cvHJNPOpS 00:38:30.885 07:36:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:30.885 07:36:53 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:30.885 07:36:53 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:30.885 07:36:53 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:30.885 07:36:53 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:30.885 07:36:53 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:30.885 07:36:53 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:31.146 07:36:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9cvHJNPOpS 00:38:31.146 07:36:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9cvHJNPOpS 00:38:31.146 07:36:53 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.9cvHJNPOpS 00:38:31.146 07:36:53 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9cvHJNPOpS 00:38:31.146 07:36:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9cvHJNPOpS 00:38:31.146 07:36:53 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:31.146 07:36:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:31.407 nvme0n1 00:38:31.407 07:36:53 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:31.407 07:36:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:31.407 07:36:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:31.407 07:36:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:31.407 07:36:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:31.407 07:36:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:31.668 07:36:53 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:31.668 07:36:53 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:31.668 07:36:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:31.928 07:36:53 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:31.928 07:36:53 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:31.928 07:36:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:31.928 07:36:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:38:31.928 07:36:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:31.928 07:36:54 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:31.928 07:36:54 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:31.928 07:36:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:31.928 07:36:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:31.928 07:36:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:31.928 07:36:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:31.928 07:36:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:32.189 07:36:54 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:32.189 07:36:54 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:32.189 07:36:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:32.450 07:36:54 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:32.450 07:36:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:32.450 07:36:54 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:32.450 07:36:54 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:32.450 07:36:54 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9cvHJNPOpS 00:38:32.450 07:36:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9cvHJNPOpS 00:38:32.711 07:36:54 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0UJK5Hkqbz 00:38:32.711 07:36:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0UJK5Hkqbz 00:38:32.972 07:36:54 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:32.972 07:36:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:32.972 nvme0n1 00:38:32.972 07:36:55 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:32.972 07:36:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:33.233 07:36:55 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:33.233 "subsystems": [ 00:38:33.233 { 00:38:33.233 "subsystem": "keyring", 00:38:33.233 "config": [ 00:38:33.233 { 00:38:33.233 "method": "keyring_file_add_key", 00:38:33.233 "params": { 00:38:33.233 "name": "key0", 00:38:33.233 "path": "/tmp/tmp.9cvHJNPOpS" 00:38:33.233 } 00:38:33.233 }, 00:38:33.233 { 00:38:33.233 "method": "keyring_file_add_key", 00:38:33.233 "params": { 00:38:33.233 "name": "key1", 00:38:33.233 "path": "/tmp/tmp.0UJK5Hkqbz" 00:38:33.234 } 00:38:33.234 } 00:38:33.234 ] 
00:38:33.234 }, 00:38:33.234 { 00:38:33.234 "subsystem": "iobuf", 00:38:33.234 "config": [ 00:38:33.234 { 00:38:33.234 "method": "iobuf_set_options", 00:38:33.234 "params": { 00:38:33.234 "small_pool_count": 8192, 00:38:33.234 "large_pool_count": 1024, 00:38:33.234 "small_bufsize": 8192, 00:38:33.234 "large_bufsize": 135168, 00:38:33.234 "enable_numa": false 00:38:33.234 } 00:38:33.234 } 00:38:33.234 ] 00:38:33.234 }, 00:38:33.234 { 00:38:33.234 "subsystem": "sock", 00:38:33.234 "config": [ 00:38:33.234 { 00:38:33.234 "method": "sock_set_default_impl", 00:38:33.234 "params": { 00:38:33.234 "impl_name": "posix" 00:38:33.234 } 00:38:33.234 }, 00:38:33.234 { 00:38:33.234 "method": "sock_impl_set_options", 00:38:33.234 "params": { 00:38:33.234 "impl_name": "ssl", 00:38:33.234 "recv_buf_size": 4096, 00:38:33.234 "send_buf_size": 4096, 00:38:33.234 "enable_recv_pipe": true, 00:38:33.234 "enable_quickack": false, 00:38:33.234 "enable_placement_id": 0, 00:38:33.234 "enable_zerocopy_send_server": true, 00:38:33.234 "enable_zerocopy_send_client": false, 00:38:33.234 "zerocopy_threshold": 0, 00:38:33.234 "tls_version": 0, 00:38:33.234 "enable_ktls": false 00:38:33.234 } 00:38:33.234 }, 00:38:33.234 { 00:38:33.234 "method": "sock_impl_set_options", 00:38:33.234 "params": { 00:38:33.234 "impl_name": "posix", 00:38:33.234 "recv_buf_size": 2097152, 00:38:33.234 "send_buf_size": 2097152, 00:38:33.234 "enable_recv_pipe": true, 00:38:33.234 "enable_quickack": false, 00:38:33.234 "enable_placement_id": 0, 00:38:33.234 "enable_zerocopy_send_server": true, 00:38:33.234 "enable_zerocopy_send_client": false, 00:38:33.234 "zerocopy_threshold": 0, 00:38:33.234 "tls_version": 0, 00:38:33.234 "enable_ktls": false 00:38:33.234 } 00:38:33.234 } 00:38:33.234 ] 00:38:33.234 }, 00:38:33.234 { 00:38:33.234 "subsystem": "vmd", 00:38:33.234 "config": [] 00:38:33.234 }, 00:38:33.234 { 00:38:33.234 "subsystem": "accel", 00:38:33.234 "config": [ 00:38:33.234 { 00:38:33.234 "method": "accel_set_options", 00:38:33.234 "params": { 00:38:33.234 "small_cache_size": 128, 00:38:33.234 "large_cache_size": 16, 00:38:33.234 "task_count": 2048, 00:38:33.234 "sequence_count": 2048, 00:38:33.234 "buf_count": 2048 00:38:33.234 } 00:38:33.234 } 00:38:33.234 ] 00:38:33.234 }, 00:38:33.234 { 00:38:33.234 "subsystem": "bdev", 00:38:33.234 "config": [ 00:38:33.234 { 00:38:33.234 "method": "bdev_set_options", 00:38:33.234 "params": { 00:38:33.234 "bdev_io_pool_size": 65535, 00:38:33.234 "bdev_io_cache_size": 256, 00:38:33.234 "bdev_auto_examine": true, 00:38:33.234 "iobuf_small_cache_size": 128, 00:38:33.234 "iobuf_large_cache_size": 16 00:38:33.234 } 00:38:33.234 }, 00:38:33.234 { 00:38:33.234 "method": "bdev_raid_set_options", 00:38:33.234 "params": { 00:38:33.234 "process_window_size_kb": 1024, 00:38:33.234 "process_max_bandwidth_mb_sec": 0 00:38:33.234 } 00:38:33.234 }, 00:38:33.234 { 00:38:33.234 "method": "bdev_iscsi_set_options", 00:38:33.234 "params": { 00:38:33.234 "timeout_sec": 30 00:38:33.234 } 00:38:33.234 }, 00:38:33.234 { 00:38:33.234 "method": "bdev_nvme_set_options", 00:38:33.234 "params": { 00:38:33.234 "action_on_timeout": "none", 00:38:33.234 "timeout_us": 0, 00:38:33.234 "timeout_admin_us": 0, 00:38:33.234 "keep_alive_timeout_ms": 10000, 00:38:33.234 "arbitration_burst": 0, 00:38:33.234 "low_priority_weight": 0, 00:38:33.234 "medium_priority_weight": 0, 00:38:33.234 "high_priority_weight": 0, 00:38:33.234 "nvme_adminq_poll_period_us": 10000, 00:38:33.234 "nvme_ioq_poll_period_us": 0, 00:38:33.234 "io_queue_requests": 512, 
00:38:33.234 "delay_cmd_submit": true, 00:38:33.234 "transport_retry_count": 4, 00:38:33.234 "bdev_retry_count": 3, 00:38:33.234 "transport_ack_timeout": 0, 00:38:33.234 "ctrlr_loss_timeout_sec": 0, 00:38:33.234 "reconnect_delay_sec": 0, 00:38:33.234 "fast_io_fail_timeout_sec": 0, 00:38:33.234 "disable_auto_failback": false, 00:38:33.234 "generate_uuids": false, 00:38:33.234 "transport_tos": 0, 00:38:33.234 "nvme_error_stat": false, 00:38:33.234 "rdma_srq_size": 0, 00:38:33.234 "io_path_stat": false, 00:38:33.234 "allow_accel_sequence": false, 00:38:33.234 "rdma_max_cq_size": 0, 00:38:33.234 "rdma_cm_event_timeout_ms": 0, 00:38:33.234 "dhchap_digests": [ 00:38:33.234 "sha256", 00:38:33.234 "sha384", 00:38:33.234 "sha512" 00:38:33.234 ], 00:38:33.234 "dhchap_dhgroups": [ 00:38:33.234 "null", 00:38:33.234 "ffdhe2048", 00:38:33.234 "ffdhe3072", 00:38:33.234 "ffdhe4096", 00:38:33.234 "ffdhe6144", 00:38:33.234 "ffdhe8192" 00:38:33.234 ] 00:38:33.234 } 00:38:33.234 }, 00:38:33.234 { 00:38:33.234 "method": "bdev_nvme_attach_controller", 00:38:33.234 "params": { 00:38:33.234 "name": "nvme0", 00:38:33.234 "trtype": "TCP", 00:38:33.234 "adrfam": "IPv4", 00:38:33.234 "traddr": "127.0.0.1", 00:38:33.234 "trsvcid": "4420", 00:38:33.234 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:33.234 "prchk_reftag": false, 00:38:33.234 "prchk_guard": false, 00:38:33.234 "ctrlr_loss_timeout_sec": 0, 00:38:33.234 "reconnect_delay_sec": 0, 00:38:33.234 "fast_io_fail_timeout_sec": 0, 00:38:33.234 "psk": "key0", 00:38:33.234 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:33.234 "hdgst": false, 00:38:33.234 "ddgst": false, 00:38:33.234 "multipath": "multipath" 00:38:33.234 } 00:38:33.234 }, 00:38:33.234 { 00:38:33.234 "method": "bdev_nvme_set_hotplug", 00:38:33.234 "params": { 00:38:33.234 "period_us": 100000, 00:38:33.234 "enable": false 00:38:33.234 } 00:38:33.234 }, 00:38:33.234 { 00:38:33.234 "method": "bdev_wait_for_examine" 00:38:33.234 } 00:38:33.234 ] 00:38:33.234 }, 00:38:33.235 { 00:38:33.235 "subsystem": "nbd", 00:38:33.235 "config": [] 00:38:33.235 } 00:38:33.235 ] 00:38:33.235 }' 00:38:33.235 07:36:55 keyring_file -- keyring/file.sh@115 -- # killprocess 3852547 00:38:33.235 07:36:55 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3852547 ']' 00:38:33.235 07:36:55 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3852547 00:38:33.235 07:36:55 keyring_file -- common/autotest_common.sh@957 -- # uname 00:38:33.235 07:36:55 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:33.235 07:36:55 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3852547 00:38:33.497 07:36:55 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:33.497 07:36:55 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:33.497 07:36:55 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3852547' 00:38:33.497 killing process with pid 3852547 00:38:33.497 07:36:55 keyring_file -- common/autotest_common.sh@971 -- # kill 3852547 00:38:33.497 Received shutdown signal, test time was about 1.000000 seconds 00:38:33.497 00:38:33.497 Latency(us) 00:38:33.497 [2024-11-20T06:36:55.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:33.497 [2024-11-20T06:36:55.775Z] =================================================================================================================== 00:38:33.497 [2024-11-20T06:36:55.775Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:38:33.497 07:36:55 keyring_file -- common/autotest_common.sh@976 -- # wait 3852547 00:38:33.497 07:36:55 keyring_file -- keyring/file.sh@118 -- # bperfpid=3854368 00:38:33.497 07:36:55 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3854368 /var/tmp/bperf.sock 00:38:33.497 07:36:55 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3854368 ']' 00:38:33.497 07:36:55 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:33.497 07:36:55 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:33.497 07:36:55 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:33.497 07:36:55 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:33.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:33.497 07:36:55 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:33.497 07:36:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:33.497 07:36:55 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:33.497 "subsystems": [ 00:38:33.497 { 00:38:33.497 "subsystem": "keyring", 00:38:33.497 "config": [ 00:38:33.497 { 00:38:33.497 "method": "keyring_file_add_key", 00:38:33.497 "params": { 00:38:33.497 "name": "key0", 00:38:33.497 "path": "/tmp/tmp.9cvHJNPOpS" 00:38:33.497 } 00:38:33.497 }, 00:38:33.497 { 00:38:33.497 "method": "keyring_file_add_key", 00:38:33.497 "params": { 00:38:33.497 "name": "key1", 00:38:33.497 "path": "/tmp/tmp.0UJK5Hkqbz" 00:38:33.497 } 00:38:33.497 } 00:38:33.497 ] 00:38:33.497 }, 00:38:33.497 { 00:38:33.497 "subsystem": "iobuf", 00:38:33.497 "config": [ 00:38:33.497 { 00:38:33.497 "method": "iobuf_set_options", 00:38:33.497 "params": { 00:38:33.497 "small_pool_count": 8192, 00:38:33.497 "large_pool_count": 1024, 00:38:33.497 "small_bufsize": 8192, 00:38:33.497 "large_bufsize": 135168, 00:38:33.497 "enable_numa": false 00:38:33.497 } 00:38:33.497 } 00:38:33.497 ] 00:38:33.497 }, 00:38:33.497 { 00:38:33.497 "subsystem": "sock", 00:38:33.497 "config": [ 00:38:33.497 { 00:38:33.497 "method": "sock_set_default_impl", 00:38:33.497 "params": { 00:38:33.497 "impl_name": "posix" 00:38:33.497 } 00:38:33.497 }, 00:38:33.497 { 00:38:33.497 "method": "sock_impl_set_options", 00:38:33.497 "params": { 00:38:33.497 "impl_name": "ssl", 00:38:33.497 "recv_buf_size": 4096, 00:38:33.497 "send_buf_size": 4096, 00:38:33.497 "enable_recv_pipe": true, 00:38:33.497 "enable_quickack": false, 00:38:33.497 "enable_placement_id": 0, 00:38:33.497 "enable_zerocopy_send_server": true, 00:38:33.497 "enable_zerocopy_send_client": false, 00:38:33.497 "zerocopy_threshold": 0, 00:38:33.497 "tls_version": 0, 00:38:33.497 "enable_ktls": false 00:38:33.497 } 00:38:33.497 }, 00:38:33.497 { 00:38:33.497 "method": "sock_impl_set_options", 00:38:33.497 "params": { 00:38:33.497 "impl_name": "posix", 00:38:33.497 "recv_buf_size": 2097152, 00:38:33.497 "send_buf_size": 2097152, 00:38:33.497 "enable_recv_pipe": true, 00:38:33.497 "enable_quickack": false, 00:38:33.497 "enable_placement_id": 0, 00:38:33.497 "enable_zerocopy_send_server": true, 00:38:33.497 "enable_zerocopy_send_client": false, 00:38:33.497 "zerocopy_threshold": 0, 00:38:33.497 "tls_version": 0, 00:38:33.497 "enable_ktls": false 00:38:33.497 } 00:38:33.497 } 00:38:33.497 ] 
00:38:33.497 }, 00:38:33.497 { 00:38:33.497 "subsystem": "vmd", 00:38:33.497 "config": [] 00:38:33.497 }, 00:38:33.497 { 00:38:33.497 "subsystem": "accel", 00:38:33.497 "config": [ 00:38:33.497 { 00:38:33.497 "method": "accel_set_options", 00:38:33.497 "params": { 00:38:33.497 "small_cache_size": 128, 00:38:33.497 "large_cache_size": 16, 00:38:33.497 "task_count": 2048, 00:38:33.497 "sequence_count": 2048, 00:38:33.497 "buf_count": 2048 00:38:33.497 } 00:38:33.497 } 00:38:33.497 ] 00:38:33.497 }, 00:38:33.497 { 00:38:33.497 "subsystem": "bdev", 00:38:33.497 "config": [ 00:38:33.497 { 00:38:33.497 "method": "bdev_set_options", 00:38:33.497 "params": { 00:38:33.497 "bdev_io_pool_size": 65535, 00:38:33.497 "bdev_io_cache_size": 256, 00:38:33.497 "bdev_auto_examine": true, 00:38:33.497 "iobuf_small_cache_size": 128, 00:38:33.497 "iobuf_large_cache_size": 16 00:38:33.497 } 00:38:33.497 }, 00:38:33.497 { 00:38:33.497 "method": "bdev_raid_set_options", 00:38:33.497 "params": { 00:38:33.497 "process_window_size_kb": 1024, 00:38:33.497 "process_max_bandwidth_mb_sec": 0 00:38:33.497 } 00:38:33.497 }, 00:38:33.497 { 00:38:33.497 "method": "bdev_iscsi_set_options", 00:38:33.497 "params": { 00:38:33.497 "timeout_sec": 30 00:38:33.497 } 00:38:33.497 }, 00:38:33.497 { 00:38:33.497 "method": "bdev_nvme_set_options", 00:38:33.497 "params": { 00:38:33.497 "action_on_timeout": "none", 00:38:33.497 "timeout_us": 0, 00:38:33.497 "timeout_admin_us": 0, 00:38:33.497 "keep_alive_timeout_ms": 10000, 00:38:33.497 "arbitration_burst": 0, 00:38:33.497 "low_priority_weight": 0, 00:38:33.497 "medium_priority_weight": 0, 00:38:33.497 "high_priority_weight": 0, 00:38:33.497 "nvme_adminq_poll_period_us": 10000, 00:38:33.497 "nvme_ioq_poll_period_us": 0, 00:38:33.497 "io_queue_requests": 512, 00:38:33.497 "delay_cmd_submit": true, 00:38:33.497 "transport_retry_count": 4, 00:38:33.497 "bdev_retry_count": 3, 00:38:33.497 "transport_ack_timeout": 0, 00:38:33.497 "ctrlr_loss_timeout_sec": 0, 00:38:33.497 "reconnect_delay_sec": 0, 00:38:33.497 "fast_io_fail_timeout_sec": 0, 00:38:33.497 "disable_auto_failback": false, 00:38:33.497 "generate_uuids": false, 00:38:33.497 "transport_tos": 0, 00:38:33.497 "nvme_error_stat": false, 00:38:33.497 "rdma_srq_size": 0, 00:38:33.497 "io_path_stat": false, 00:38:33.497 "allow_accel_sequence": false, 00:38:33.497 "rdma_max_cq_size": 0, 00:38:33.497 "rdma_cm_event_timeout_ms": 0, 00:38:33.497 "dhchap_digests": [ 00:38:33.497 "sha256", 00:38:33.497 "sha384", 00:38:33.497 "sha512" 00:38:33.497 ], 00:38:33.497 "dhchap_dhgroups": [ 00:38:33.497 "null", 00:38:33.497 "ffdhe2048", 00:38:33.497 "ffdhe3072", 00:38:33.497 "ffdhe4096", 00:38:33.497 "ffdhe6144", 00:38:33.497 "ffdhe8192" 00:38:33.497 ] 00:38:33.497 } 00:38:33.497 }, 00:38:33.497 { 00:38:33.497 "method": "bdev_nvme_attach_controller", 00:38:33.497 "params": { 00:38:33.497 "name": "nvme0", 00:38:33.497 "trtype": "TCP", 00:38:33.497 "adrfam": "IPv4", 00:38:33.497 "traddr": "127.0.0.1", 00:38:33.497 "trsvcid": "4420", 00:38:33.497 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:33.497 "prchk_reftag": false, 00:38:33.498 "prchk_guard": false, 00:38:33.498 "ctrlr_loss_timeout_sec": 0, 00:38:33.498 "reconnect_delay_sec": 0, 00:38:33.498 "fast_io_fail_timeout_sec": 0, 00:38:33.498 "psk": "key0", 00:38:33.498 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:33.498 "hdgst": false, 00:38:33.498 "ddgst": false, 00:38:33.498 "multipath": "multipath" 00:38:33.498 } 00:38:33.498 }, 00:38:33.498 { 00:38:33.498 "method": "bdev_nvme_set_hotplug", 00:38:33.498 
"params": { 00:38:33.498 "period_us": 100000, 00:38:33.498 "enable": false 00:38:33.498 } 00:38:33.498 }, 00:38:33.498 { 00:38:33.498 "method": "bdev_wait_for_examine" 00:38:33.498 } 00:38:33.498 ] 00:38:33.498 }, 00:38:33.498 { 00:38:33.498 "subsystem": "nbd", 00:38:33.498 "config": [] 00:38:33.498 } 00:38:33.498 ] 00:38:33.498 }' 00:38:33.498 [2024-11-20 07:36:55.688594] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:38:33.498 [2024-11-20 07:36:55.688653] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854368 ] 00:38:33.759 [2024-11-20 07:36:55.772046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:33.759 [2024-11-20 07:36:55.801404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:33.759 [2024-11-20 07:36:55.944084] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:34.330 07:36:56 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:34.330 07:36:56 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:38:34.330 07:36:56 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:34.330 07:36:56 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:34.330 07:36:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:34.592 07:36:56 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:34.592 07:36:56 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:34.592 07:36:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:34.592 07:36:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:34.592 07:36:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:34.592 07:36:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:34.592 07:36:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:34.592 07:36:56 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:34.592 07:36:56 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:34.592 07:36:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:34.592 07:36:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:34.592 07:36:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:34.592 07:36:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:34.592 07:36:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:34.852 07:36:57 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:34.852 07:36:57 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:34.852 07:36:57 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:34.852 07:36:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:35.113 07:36:57 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:35.113 07:36:57 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:35.113 07:36:57 
keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.9cvHJNPOpS /tmp/tmp.0UJK5Hkqbz 00:38:35.113 07:36:57 keyring_file -- keyring/file.sh@20 -- # killprocess 3854368 00:38:35.113 07:36:57 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3854368 ']' 00:38:35.113 07:36:57 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3854368 00:38:35.113 07:36:57 keyring_file -- common/autotest_common.sh@957 -- # uname 00:38:35.113 07:36:57 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:35.113 07:36:57 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3854368 00:38:35.113 07:36:57 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:35.113 07:36:57 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:35.113 07:36:57 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3854368' 00:38:35.113 killing process with pid 3854368 00:38:35.113 07:36:57 keyring_file -- common/autotest_common.sh@971 -- # kill 3854368 00:38:35.113 Received shutdown signal, test time was about 1.000000 seconds 00:38:35.113 00:38:35.113 Latency(us) 00:38:35.113 [2024-11-20T06:36:57.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:35.113 [2024-11-20T06:36:57.391Z] =================================================================================================================== 00:38:35.113 [2024-11-20T06:36:57.391Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:35.113 07:36:57 keyring_file -- common/autotest_common.sh@976 -- # wait 3854368 00:38:35.113 07:36:57 keyring_file -- keyring/file.sh@21 -- # killprocess 3852497 00:38:35.113 07:36:57 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3852497 ']' 00:38:35.113 07:36:57 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3852497 00:38:35.113 07:36:57 keyring_file -- common/autotest_common.sh@957 -- # uname 00:38:35.113 07:36:57 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:35.113 07:36:57 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3852497 00:38:35.373 07:36:57 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:35.373 07:36:57 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:35.373 07:36:57 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3852497' 00:38:35.373 killing process with pid 3852497 00:38:35.373 07:36:57 keyring_file -- common/autotest_common.sh@971 -- # kill 3852497 00:38:35.373 07:36:57 keyring_file -- common/autotest_common.sh@976 -- # wait 3852497 00:38:35.373 00:38:35.373 real 0m12.126s 00:38:35.373 user 0m29.237s 00:38:35.373 sys 0m2.756s 00:38:35.373 07:36:57 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:35.373 07:36:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:35.373 ************************************ 00:38:35.373 END TEST keyring_file 00:38:35.373 ************************************ 00:38:35.634 07:36:57 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:38:35.634 07:36:57 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:35.634 07:36:57 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:38:35.634 07:36:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 
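The killprocess calls above follow a fixed pattern: probe the pid with kill -0, confirm the process name with ps --no-headers -o comm=, send the signal, then wait for the pid to drain. Roughly, in standalone form (a sketch of the pattern, not the common.sh helper itself):

    import os
    import signal
    import time

    def killprocess(pid: int, timeout: float = 10.0) -> None:
        """Terminate a test daemon and wait for it to disappear."""
        try:
            os.kill(pid, 0)             # like 'kill -0': does the pid exist?
        except ProcessLookupError:
            return                      # already gone
        os.kill(pid, signal.SIGTERM)
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                os.kill(pid, 0)
            except ProcessLookupError:
                return
            time.sleep(0.1)
        os.kill(pid, signal.SIGKILL)    # escalate if SIGTERM was ignored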
00:38:35.634 07:36:57 -- common/autotest_common.sh@10 -- # set +x 00:38:35.634 ************************************ 00:38:35.634 START TEST keyring_linux 00:38:35.634 ************************************ 00:38:35.634 07:36:57 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:35.634 Joined session keyring: 980954577 00:38:35.634 * Looking for test storage... 00:38:35.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:35.634 07:36:57 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:35.634 07:36:57 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:38:35.635 07:36:57 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:35.635 07:36:57 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:35.635 07:36:57 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:35.635 07:36:57 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:35.635 07:36:57 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:35.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.635 --rc genhtml_branch_coverage=1 00:38:35.635 --rc genhtml_function_coverage=1 00:38:35.635 --rc genhtml_legend=1 00:38:35.635 --rc geninfo_all_blocks=1 00:38:35.635 --rc geninfo_unexecuted_blocks=1 00:38:35.635 00:38:35.635 ' 00:38:35.635 07:36:57 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:35.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.635 --rc genhtml_branch_coverage=1 00:38:35.635 --rc genhtml_function_coverage=1 00:38:35.635 --rc genhtml_legend=1 00:38:35.635 --rc geninfo_all_blocks=1 00:38:35.635 --rc geninfo_unexecuted_blocks=1 00:38:35.635 00:38:35.635 ' 00:38:35.635 07:36:57 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:35.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.635 --rc genhtml_branch_coverage=1 00:38:35.635 --rc genhtml_function_coverage=1 00:38:35.635 --rc genhtml_legend=1 00:38:35.635 --rc geninfo_all_blocks=1 00:38:35.635 --rc geninfo_unexecuted_blocks=1 00:38:35.635 00:38:35.635 ' 00:38:35.635 07:36:57 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:35.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.635 --rc genhtml_branch_coverage=1 00:38:35.635 --rc genhtml_function_coverage=1 00:38:35.635 --rc genhtml_legend=1 00:38:35.635 --rc geninfo_all_blocks=1 00:38:35.635 --rc geninfo_unexecuted_blocks=1 00:38:35.635 00:38:35.635 ' 00:38:35.635 07:36:57 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:35.635 07:36:57 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:35.635 07:36:57 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:35.635 07:36:57 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:35.635 07:36:57 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:35.635 07:36:57 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:35.635 07:36:57 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:35.635 07:36:57 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:35.635 07:36:57 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:35.635 07:36:57 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:35.635 07:36:57 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:35.635 07:36:57 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:35.635 07:36:57 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:35.897 07:36:57 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:35.897 07:36:57 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:35.897 07:36:57 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:35.897 07:36:57 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:35.897 07:36:57 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:35.897 07:36:57 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:35.897 07:36:57 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:35.897 07:36:57 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:35.898 07:36:57 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:35.898 07:36:57 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:35.898 07:36:57 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:35.898 07:36:57 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.898 07:36:57 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.898 07:36:57 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.898 07:36:57 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:35.898 07:36:57 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
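Aside on the hostnqn/hostid pair captured above: nvme-cli's gen-hostnqn emits a UUID-based NQN (typically derived from the host's DMI product UUID, falling back to a random one), and common.sh reuses the UUID part as NVME_HOSTID. A quick way to see the shape locally, assuming nvme-cli is installed:

    nvme gen-hostnqn
    # -> nqn.2014-08.org.nvmexpress:uuid:<uuid>   (the UUID differs per host)
    # common.sh keeps the UUID part as NVME_HOSTID and builds
    # NVME_HOST=(--hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID) for later nvme connect calls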
00:38:35.898 07:36:57 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:35.898 07:36:57 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:35.898 07:36:57 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:35.898 07:36:57 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:35.898 07:36:57 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:35.898 07:36:57 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:35.898 07:36:57 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:35.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:35.898 07:36:57 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:35.898 07:36:57 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:35.898 07:36:57 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:35.898 07:36:57 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:35.898 07:36:57 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:35.898 07:36:57 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:35.898 07:36:57 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:35.898 07:36:57 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:35.898 07:36:57 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:35.898 07:36:57 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:35.898 07:36:57 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:35.898 07:36:57 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:35.898 07:36:57 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:35.898 07:36:57 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:35.898 07:36:57 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:35.898 07:36:57 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:35.898 07:36:57 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:35.898 07:36:57 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:35.898 07:36:57 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:35.898 07:36:57 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:35.898 07:36:57 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:35.898 07:36:57 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:35.898 07:36:57 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:35.898 07:36:57 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:35.898 /tmp/:spdk-test:key0 00:38:35.898 07:36:57 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:35.898 07:36:57 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:35.898 07:36:57 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:35.898 07:36:57 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:35.898 07:36:57 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:35.898 07:36:57 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:35.898 
07:36:57 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:35.898 07:36:57 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:35.898 07:36:57 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:35.898 07:36:57 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:35.898 07:36:57 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:35.898 07:36:57 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:35.898 07:36:57 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:35.898 07:36:58 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:35.898 07:36:58 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:35.898 /tmp/:spdk-test:key1 00:38:35.898 07:36:58 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3854840 00:38:35.898 07:36:58 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3854840 00:38:35.898 07:36:58 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:35.898 07:36:58 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 3854840 ']' 00:38:35.898 07:36:58 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:35.898 07:36:58 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:35.898 07:36:58 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:35.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:35.898 07:36:58 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:35.898 07:36:58 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:35.898 [2024-11-20 07:36:58.087055] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
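For readers reconstructing the prep_key trace above: a minimal sketch of what the inline 'python -' step appears to compute, assuming the NVMe TLS PSK interchange layout of base64(key bytes || CRC32) wrapped in an NVMeTLSkey-1:<hash>: prefix with a trailing colon. The CRC byte order is an assumption, and this is a stand-in, not the verbatim common.sh code:

    key=00112233445566778899aabbccddeeff   # key0 from the trace; the hex string is used verbatim as key material
    hash=0                                 # 0 = no PSK hash
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k+c).decode()))' "$key" "$hash"
    # prints a string of the same shape as the /tmp/:spdk-test:key0 payload above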
00:38:35.898 [2024-11-20 07:36:58.087137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854840 ] 00:38:36.160 [2024-11-20 07:36:58.173736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.160 [2024-11-20 07:36:58.208186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:36.730 07:36:58 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:36.731 07:36:58 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:38:36.731 07:36:58 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:36.731 07:36:58 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.731 07:36:58 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:36.731 [2024-11-20 07:36:58.871525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:36.731 null0 00:38:36.731 [2024-11-20 07:36:58.903585] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:36.731 [2024-11-20 07:36:58.903937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:36.731 07:36:58 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.731 07:36:58 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:36.731 733716049 00:38:36.731 07:36:58 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:36.731 398018595 00:38:36.731 07:36:58 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3855140 00:38:36.731 07:36:58 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3855140 /var/tmp/bperf.sock 00:38:36.731 07:36:58 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:36.731 07:36:58 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 3855140 ']' 00:38:36.731 07:36:58 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:36.731 07:36:58 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:36.731 07:36:58 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:36.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:36.731 07:36:58 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:36.731 07:36:58 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:36.731 [2024-11-20 07:36:58.982397] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
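The two serial numbers printed above (733716049 for key0, 398018595 for key1) are what the rest of the test pivots on. A condensed sketch of the same keyutils plumbing, using the interchange string written to /tmp/:spdk-test:key0 earlier:

    psk=$(cat /tmp/:spdk-test:key0)                  # NVMeTLSkey-1:00:...: payload from prep_key
    sn=$(keyctl add user :spdk-test:key0 "$psk" @s)  # seed the session keyring; prints the serial
    keyctl search @s user :spdk-test:key0            # name -> serial, what get_keysn does
    keyctl print "$sn"                               # payload readback, compared against the PSK

bdevperf then only ever names the key (--psk :spdk-test:key0); resolving that name through the session keyring is the keyring_linux module's job once keyring_linux_set_options --enable has been sent over /var/tmp/bperf.sock.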
00:38:36.731 [2024-11-20 07:36:58.982444] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855140 ] 00:38:36.992 [2024-11-20 07:36:59.064076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.992 [2024-11-20 07:36:59.094289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:37.563 07:36:59 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:37.563 07:36:59 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:38:37.563 07:36:59 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:37.563 07:36:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:37.824 07:36:59 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:37.824 07:36:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:38.085 07:37:00 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:38.085 07:37:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:38.085 [2024-11-20 07:37:00.290631] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:38.346 nvme0n1 00:38:38.346 07:37:00 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:38:38.346 07:37:00 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:38.346 07:37:00 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:38.346 07:37:00 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:38.346 07:37:00 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:38.346 07:37:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:38.346 07:37:00 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:38.346 07:37:00 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:38.346 07:37:00 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:38.346 07:37:00 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:38.346 07:37:00 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:38.346 07:37:00 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:38.346 07:37:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:38.606 07:37:00 keyring_linux -- keyring/linux.sh@25 -- # sn=733716049 00:38:38.606 07:37:00 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:38.607 07:37:00 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:38.607 07:37:00 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 733716049 == \7\3\3\7\1\6\0\4\9 ]] 00:38:38.607 07:37:00 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 733716049 00:38:38.607 07:37:00 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:38.607 07:37:00 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:38.607 Running I/O for 1 seconds... 00:38:39.990 24474.00 IOPS, 95.60 MiB/s 00:38:39.990 Latency(us) 00:38:39.990 [2024-11-20T06:37:02.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:39.990 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:39.990 nvme0n1 : 1.01 24473.95 95.60 0.00 0.00 5214.66 4287.15 10048.85 00:38:39.990 [2024-11-20T06:37:02.268Z] =================================================================================================================== 00:38:39.990 [2024-11-20T06:37:02.269Z] Total : 24473.95 95.60 0.00 0.00 5214.66 4287.15 10048.85 00:38:39.991 { 00:38:39.991 "results": [ 00:38:39.991 { 00:38:39.991 "job": "nvme0n1", 00:38:39.991 "core_mask": "0x2", 00:38:39.991 "workload": "randread", 00:38:39.991 "status": "finished", 00:38:39.991 "queue_depth": 128, 00:38:39.991 "io_size": 4096, 00:38:39.991 "runtime": 1.005232, 00:38:39.991 "iops": 24473.952281662343, 00:38:39.991 "mibps": 95.60137610024353, 00:38:39.991 "io_failed": 0, 00:38:39.991 "io_timeout": 0, 00:38:39.991 "avg_latency_us": 5214.658190391025, 00:38:39.991 "min_latency_us": 4287.1466666666665, 00:38:39.991 "max_latency_us": 10048.853333333333 00:38:39.991 } 00:38:39.991 ], 00:38:39.991 "core_count": 1 00:38:39.991 } 00:38:39.991 07:37:01 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:39.991 07:37:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:39.991 07:37:02 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:39.991 07:37:02 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:39.991 07:37:02 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:39.991 07:37:02 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:39.991 07:37:02 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:39.991 07:37:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:39.991 07:37:02 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:39.991 07:37:02 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:39.991 07:37:02 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:39.991 07:37:02 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:39.991 07:37:02 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:38:39.991 07:37:02 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:38:39.991 07:37:02 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:39.991 07:37:02 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:39.991 07:37:02 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:39.991 07:37:02 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:39.991 07:37:02 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:39.991 07:37:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:40.253 [2024-11-20 07:37:02.383014] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:40.253 [2024-11-20 07:37:02.383124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc09c0 (107): Transport endpoint is not connected 00:38:40.253 [2024-11-20 07:37:02.384120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc09c0 (9): Bad file descriptor 00:38:40.253 [2024-11-20 07:37:02.385121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:40.253 [2024-11-20 07:37:02.385128] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:40.253 [2024-11-20 07:37:02.385134] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:40.253 [2024-11-20 07:37:02.385140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
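The errors above are the point of this step: key1 was never registered on the target, so the attach is expected to fail, and the NOT wrapper inverts the status so the test passes only when it does. A simplified sketch of that idiom (the real autotest_common.sh helper also does the es exit-status bookkeeping visible in this trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NOT() { ! "$@"; }   # succeed iff the wrapped command fails
    NOT "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1

The JSON-RPC request and error response for that failed attach follow: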
00:38:40.253 request: 00:38:40.253 { 00:38:40.253 "name": "nvme0", 00:38:40.253 "trtype": "tcp", 00:38:40.253 "traddr": "127.0.0.1", 00:38:40.253 "adrfam": "ipv4", 00:38:40.253 "trsvcid": "4420", 00:38:40.253 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:40.253 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:40.253 "prchk_reftag": false, 00:38:40.253 "prchk_guard": false, 00:38:40.253 "hdgst": false, 00:38:40.253 "ddgst": false, 00:38:40.253 "psk": ":spdk-test:key1", 00:38:40.253 "allow_unrecognized_csi": false, 00:38:40.253 "method": "bdev_nvme_attach_controller", 00:38:40.253 "req_id": 1 00:38:40.253 } 00:38:40.253 Got JSON-RPC error response 00:38:40.253 response: 00:38:40.253 { 00:38:40.253 "code": -5, 00:38:40.253 "message": "Input/output error" 00:38:40.253 } 00:38:40.253 07:37:02 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:38:40.253 07:37:02 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:40.253 07:37:02 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:40.253 07:37:02 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:40.253 07:37:02 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:40.253 07:37:02 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:40.253 07:37:02 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:40.253 07:37:02 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:40.253 07:37:02 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:40.253 07:37:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:40.253 07:37:02 keyring_linux -- keyring/linux.sh@33 -- # sn=733716049 00:38:40.253 07:37:02 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 733716049 00:38:40.253 1 links removed 00:38:40.253 07:37:02 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:40.253 07:37:02 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:40.253 07:37:02 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:40.253 07:37:02 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:40.253 07:37:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:40.253 07:37:02 keyring_linux -- keyring/linux.sh@33 -- # sn=398018595 00:38:40.253 07:37:02 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 398018595 00:38:40.253 1 links removed 00:38:40.253 07:37:02 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3855140 00:38:40.253 07:37:02 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 3855140 ']' 00:38:40.253 07:37:02 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 3855140 00:38:40.253 07:37:02 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:38:40.253 07:37:02 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:40.253 07:37:02 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3855140 00:38:40.253 07:37:02 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:40.253 07:37:02 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:40.253 07:37:02 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3855140' 00:38:40.253 killing process with pid 3855140 00:38:40.253 07:37:02 keyring_linux -- common/autotest_common.sh@971 -- # kill 3855140 00:38:40.253 Received shutdown signal, test time was about 1.000000 seconds 00:38:40.253 00:38:40.253 
Latency(us) 00:38:40.253 [2024-11-20T06:37:02.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:40.253 [2024-11-20T06:37:02.531Z] =================================================================================================================== 00:38:40.253 [2024-11-20T06:37:02.531Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:40.253 07:37:02 keyring_linux -- common/autotest_common.sh@976 -- # wait 3855140 00:38:40.514 07:37:02 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3854840 00:38:40.514 07:37:02 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 3854840 ']' 00:38:40.514 07:37:02 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 3854840 00:38:40.514 07:37:02 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:38:40.514 07:37:02 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:40.514 07:37:02 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3854840 00:38:40.514 07:37:02 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:40.514 07:37:02 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:40.514 07:37:02 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3854840' 00:38:40.514 killing process with pid 3854840 00:38:40.514 07:37:02 keyring_linux -- common/autotest_common.sh@971 -- # kill 3854840 00:38:40.514 07:37:02 keyring_linux -- common/autotest_common.sh@976 -- # wait 3854840 00:38:40.775 00:38:40.775 real 0m5.148s 00:38:40.775 user 0m9.497s 00:38:40.775 sys 0m1.461s 00:38:40.775 07:37:02 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:40.775 07:37:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:40.775 ************************************ 00:38:40.775 END TEST keyring_linux 00:38:40.775 ************************************ 00:38:40.775 07:37:02 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:38:40.775 07:37:02 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:40.775 07:37:02 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:40.775 07:37:02 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:38:40.775 07:37:02 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:38:40.775 07:37:02 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:38:40.775 07:37:02 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:40.775 07:37:02 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:40.775 07:37:02 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:40.775 07:37:02 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:38:40.775 07:37:02 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:40.775 07:37:02 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:38:40.775 07:37:02 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:40.775 07:37:02 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:40.775 07:37:02 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:38:40.775 07:37:02 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:38:40.775 07:37:02 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:38:40.775 07:37:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:40.775 07:37:02 -- common/autotest_common.sh@10 -- # set +x 00:38:40.775 07:37:02 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:38:40.775 07:37:02 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:38:40.775 07:37:02 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:38:40.775 07:37:02 -- common/autotest_common.sh@10 -- # set +x 00:38:48.918 INFO: APP EXITING 
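Before the workspace teardown below, note the cleanup that already ran under the EXIT trap (the "1 links removed" lines): each test key is resolved by name and unlinked from the session keyring. A minimal equivalent, with the removal of the on-disk /tmp copies being an assumption rather than something visible in this trace:

    for name in :spdk-test:key0 :spdk-test:key1; do
        sn=$(keyctl search @s user "$name")   # serial lookup, as get_keysn does
        keyctl unlink "$sn"                   # prints "1 links removed" on success
        rm -f "/tmp/$name"                    # assumed tidy-up of the PSK files
    done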
00:38:48.918 INFO: killing all VMs 00:38:48.918 INFO: killing vhost app 00:38:48.918 WARN: no vhost pid file found 00:38:48.918 INFO: EXIT DONE 00:38:52.216 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:38:52.216 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:38:52.216 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:38:52.216 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:38:52.216 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:38:52.216 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:38:52.216 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:38:52.216 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:38:52.216 0000:65:00.0 (144d a80a): Already using the nvme driver 00:38:52.216 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:38:52.216 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:38:52.216 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:38:52.216 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:38:52.216 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:38:52.216 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:38:52.216 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:38:52.216 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:38:56.419 Cleaning 00:38:56.419 Removing: /var/run/dpdk/spdk0/config 00:38:56.419 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:56.419 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:56.419 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:56.419 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:56.419 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:56.419 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:56.419 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:56.419 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:56.419 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:56.419 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:56.419 Removing: /var/run/dpdk/spdk1/config 00:38:56.419 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:56.419 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:56.419 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:56.419 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:56.419 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:56.419 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:56.419 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:56.419 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:56.419 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:56.419 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:56.419 Removing: /var/run/dpdk/spdk2/config 00:38:56.419 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:56.419 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:56.419 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:56.420 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:56.420 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:56.420 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:56.420 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:56.420 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:56.420 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:56.420 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:56.420 Removing: 
/var/run/dpdk/spdk3/config 00:38:56.420 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:56.420 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:56.420 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:56.420 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:56.420 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:56.420 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:56.420 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:56.420 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:56.420 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:56.420 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:56.420 Removing: /var/run/dpdk/spdk4/config 00:38:56.420 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:56.420 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:56.420 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:56.420 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:56.420 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:56.420 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:56.420 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:56.420 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:56.420 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:56.420 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:56.420 Removing: /dev/shm/bdev_svc_trace.1 00:38:56.420 Removing: /dev/shm/nvmf_trace.0 00:38:56.420 Removing: /dev/shm/spdk_tgt_trace.pid3276307 00:38:56.420 Removing: /var/run/dpdk/spdk0 00:38:56.420 Removing: /var/run/dpdk/spdk1 00:38:56.420 Removing: /var/run/dpdk/spdk2 00:38:56.420 Removing: /var/run/dpdk/spdk3 00:38:56.420 Removing: /var/run/dpdk/spdk4 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3274816 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3276307 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3277154 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3278193 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3278533 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3279606 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3279854 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3280070 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3281210 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3282000 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3282408 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3282894 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3283302 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3283600 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3283755 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3284106 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3284646 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3286060 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3289631 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3289999 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3290365 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3290392 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3291005 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3291092 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3291725 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3291806 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3292173 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3292365 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3292547 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3292807 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3293331 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3293630 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3293918 00:38:56.420 Removing: 
/var/run/dpdk/spdk_pid3298607 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3303853 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3315848 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3316710 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3321794 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3322158 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3327507 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3334661 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3338281 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3350812 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3361776 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3363868 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3364906 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3385610 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3390538 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3447475 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3454131 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3461017 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3469116 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3469196 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3470216 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3471237 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3472243 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3472914 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3472924 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3473252 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3473271 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3473273 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3474278 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3475281 00:38:56.420 Removing: /var/run/dpdk/spdk_pid3476304 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3477245 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3477283 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3477614 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3478990 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3480137 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3490244 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3524571 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3529980 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3532087 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3534776 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3534979 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3535255 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3535595 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3536334 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3538416 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3539756 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3540408 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3542976 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3543799 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3544600 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3549646 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3556328 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3556330 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3556332 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3560971 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3571299 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3576126 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3583907 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3585412 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3587254 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3588782 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3594475 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3599795 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3604655 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3613859 00:38:56.680 Removing: /var/run/dpdk/spdk_pid3613966 00:38:56.681 Removing: 
/var/run/dpdk/spdk_pid3619116 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3619283 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3619466 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3620039 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3620128 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3625535 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3626350 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3631693 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3634981 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3642102 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3648716 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3658963 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3667619 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3667627 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3691048 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3691870 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3692731 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3693421 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3694478 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3695159 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3695847 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3696679 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3701911 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3702230 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3709281 00:38:56.681 Removing: /var/run/dpdk/spdk_pid3709661 00:38:56.941 Removing: /var/run/dpdk/spdk_pid3716143 00:38:56.941 Removing: /var/run/dpdk/spdk_pid3721229 00:38:56.941 Removing: /var/run/dpdk/spdk_pid3732837 00:38:56.941 Removing: /var/run/dpdk/spdk_pid3733514 00:38:56.941 Removing: /var/run/dpdk/spdk_pid3738676 00:38:56.941 Removing: /var/run/dpdk/spdk_pid3739073 00:38:56.941 Removing: /var/run/dpdk/spdk_pid3744624 00:38:56.941 Removing: /var/run/dpdk/spdk_pid3751569 00:38:56.941 Removing: /var/run/dpdk/spdk_pid3754619 00:38:56.941 Removing: /var/run/dpdk/spdk_pid3766801 00:38:56.941 Removing: /var/run/dpdk/spdk_pid3777458 00:38:56.941 Removing: /var/run/dpdk/spdk_pid3779350 00:38:56.941 Removing: /var/run/dpdk/spdk_pid3780427 00:38:56.941 Removing: /var/run/dpdk/spdk_pid3800682 00:38:56.941 Removing: /var/run/dpdk/spdk_pid3805148 00:38:56.941 Removing: /var/run/dpdk/spdk_pid3808547 00:38:56.941 Removing: /var/run/dpdk/spdk_pid3816036 00:38:56.941 Removing: /var/run/dpdk/spdk_pid3816069 00:38:56.941 Removing: /var/run/dpdk/spdk_pid3822237 00:38:56.941 Removing: /var/run/dpdk/spdk_pid3824442 00:38:56.941 Removing: /var/run/dpdk/spdk_pid3826829 00:38:56.942 Removing: /var/run/dpdk/spdk_pid3828139 00:38:56.942 Removing: /var/run/dpdk/spdk_pid3830572 00:38:56.942 Removing: /var/run/dpdk/spdk_pid3831871 00:38:56.942 Removing: /var/run/dpdk/spdk_pid3842102 00:38:56.942 Removing: /var/run/dpdk/spdk_pid3842628 00:38:56.942 Removing: /var/run/dpdk/spdk_pid3843710 00:38:56.942 Removing: /var/run/dpdk/spdk_pid3846643 00:38:56.942 Removing: /var/run/dpdk/spdk_pid3847287 00:38:56.942 Removing: /var/run/dpdk/spdk_pid3847713 00:38:56.942 Removing: /var/run/dpdk/spdk_pid3852497 00:38:56.942 Removing: /var/run/dpdk/spdk_pid3852547 00:38:56.942 Removing: /var/run/dpdk/spdk_pid3854368 00:38:56.942 Removing: /var/run/dpdk/spdk_pid3854840 00:38:56.942 Removing: /var/run/dpdk/spdk_pid3855140 00:38:56.942 Clean 00:38:56.942 07:37:19 -- common/autotest_common.sh@1451 -- # return 0 00:38:56.942 07:37:19 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:38:56.942 07:37:19 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:56.942 07:37:19 -- common/autotest_common.sh@10 -- # set +x 00:38:57.202 07:37:19 -- 
spdk/autotest.sh@387 -- # timing_exit autotest 00:38:57.202 07:37:19 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:57.202 07:37:19 -- common/autotest_common.sh@10 -- # set +x 00:38:57.202 07:37:19 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:57.202 07:37:19 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:38:57.202 07:37:19 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:38:57.202 07:37:19 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:38:57.202 07:37:19 -- spdk/autotest.sh@394 -- # hostname 00:38:57.202 07:37:19 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:38:57.202 geninfo: WARNING: invalid characters removed from testname! 00:39:23.798 07:37:44 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:25.712 07:37:47 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:27.097 07:37:49 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:29.096 07:37:50 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:30.493 07:37:52 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:32.406 07:37:54 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:33.790 07:37:55 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:33.790 07:37:55 -- spdk/autorun.sh@1 -- $ timing_finish 00:39:33.790 07:37:55 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:39:33.790 07:37:55 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:33.790 07:37:55 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:39:33.790 07:37:55 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:33.790 + [[ -n 3189387 ]] 00:39:33.790 + sudo kill 3189387 00:39:33.801 [Pipeline] } 00:39:33.816 [Pipeline] // stage 00:39:33.821 [Pipeline] } 00:39:33.836 [Pipeline] // timeout 00:39:33.841 [Pipeline] } 00:39:33.855 [Pipeline] // catchError 00:39:33.860 [Pipeline] } 00:39:33.875 [Pipeline] // wrap 00:39:33.881 [Pipeline] } 00:39:33.895 [Pipeline] // catchError 00:39:33.904 [Pipeline] stage 00:39:33.907 [Pipeline] { (Epilogue) 00:39:33.922 [Pipeline] catchError 00:39:33.923 [Pipeline] { 00:39:33.937 [Pipeline] echo 00:39:33.940 Cleanup processes 00:39:33.946 [Pipeline] sh 00:39:34.237 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:34.237 3868151 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:34.253 [Pipeline] sh 00:39:34.542 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:34.542 ++ grep -v 'sudo pgrep' 00:39:34.542 ++ awk '{print $1}' 00:39:34.542 + sudo kill -9 00:39:34.542 + true 00:39:34.555 [Pipeline] sh 00:39:34.844 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:47.090 [Pipeline] sh 00:39:47.380 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:47.380 Artifacts sizes are good 00:39:47.394 [Pipeline] archiveArtifacts 00:39:47.401 Archiving artifacts 00:39:47.533 [Pipeline] sh 00:39:47.820 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:39:47.835 [Pipeline] cleanWs 00:39:47.846 [WS-CLEANUP] Deleting project workspace... 00:39:47.846 [WS-CLEANUP] Deferred wipeout is used... 00:39:47.853 [WS-CLEANUP] done 00:39:47.855 [Pipeline] } 00:39:47.872 [Pipeline] // catchError 00:39:47.884 [Pipeline] sh 00:39:48.173 + logger -p user.info -t JENKINS-CI 00:39:48.183 [Pipeline] } 00:39:48.196 [Pipeline] // stage 00:39:48.201 [Pipeline] } 00:39:48.215 [Pipeline] // node 00:39:48.221 [Pipeline] End of Pipeline 00:39:48.254 Finished: SUCCESS
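For reference, the coverage pass traced near the end condenses to the following recipe (workspace paths as above; the final genhtml render is an assumption, since this log ends at the lcov filtering):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    opts='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    lcov $opts -q -c --no-external -d "$spdk" -t "$(hostname)" -o "$spdk/../output/cov_test.info"
    lcov $opts -q -a "$spdk/../output/cov_base.info" -a "$spdk/../output/cov_test.info" -o "$spdk/../output/cov_total.info"
    lcov $opts -q -r "$spdk/../output/cov_total.info" '*/dpdk/*' -o "$spdk/../output/cov_total.info"
    genhtml "$spdk/../output/cov_total.info" -o "$spdk/../output/coverage"   # assumed final step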